A significant portion of Landslide Early Warning Systems (LEWS) relies on the definition of operational thresholds and the monitoring of cumulative rainfall for alert issuance. These thresholds can be obtained in various ways, but most often they are based on previous landslide data. This approach introduces several limitations. For instance, there is a requirement for the location to have been previously monitored in some way to have this type of information recorded. Another significant limitation is the need for information regarding the location and timing of incidents. Despite the current ease of obtaining location information (GPS, drone images, etc.), the timing of the event remains challenging to ascertain for a considerable portion of landslide data. Concerning rainfall monitoring, there are multiple ways to consider it, for instance, examining accumulations over various intervals (1 h, 6 h, 24 h, 72 h), as well as calculating effective rainfall, which represents the precipitation that actually infiltrates the soil. However, in the vast majority of cases, both the thresholds and the rain monitoring approach are defined manually and subjectively, relying on the operators’ experience. This makes the process labor-intensive and time-consuming, hindering the establishment of a truly standardized and rapidly scalable methodology. In this work, we propose a Landslide Early Warning System (LEWS) based on the concept of rainfall half-life and the determination of thresholds using cluster analysis and data inversion. The system is designed to be applied in extensive monitoring networks, such as the one operated by Cemaden, Brazil’s National Center for Monitoring and Early Warning of Natural Disasters.
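As a minimal sketch of the rainfall half-life idea (not Cemaden's operational formulation), the Python snippet below weights each past hourly accumulation with an exponential half-life decay and compares the resulting effective rainfall against an illustrative threshold. The half-life value, the threshold, and the synthetic rainfall series are all assumptions.

    import numpy as np

    def effective_rainfall(hourly_rain, half_life_h=12.0):
        # Weight each past hour's rain by 0.5 ** (age_h / half_life_h) and sum.
        hourly_rain = np.asarray(hourly_rain, dtype=float)
        age_h = np.arange(len(hourly_rain))[::-1]      # most recent sample has age 0
        weights = 0.5 ** (age_h / half_life_h)
        return float(np.sum(hourly_rain * weights))

    # Toy usage: 72 h of synthetic rainfall; issue an alert when the decayed
    # accumulation exceeds an illustrative threshold (both values are assumptions).
    rain_mm = np.random.default_rng(0).gamma(shape=0.3, scale=5.0, size=72)
    THRESHOLD_MM = 80.0
    if effective_rainfall(rain_mm, half_life_h=12.0) > THRESHOLD_MM:
        print("issue landslide alert")
    else:
        print("no alert")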
This paper investigates the design essence of Chinese classical private gardens, integrating their design elements and fundamental principles. It systematically analyzes the unique characteristics and differences among classical private gardens in the Northern, Jiangnan, and Lingnan regions. The study examines nine classical private gardens from Northern China, Jiangnan, and Lingnan by utilizing principal component cluster analysis. Based on literature analysis and field research, 273 variables were selected for principal component analysis, from which four components with higher contribution rates were chosen for further study. Subsequently, we employed clustering analysis to compare the differences among the three types of gardens. The results reveal that the first principal component effectively highlights the differences between Jiangnan and Lingnan private gardens. The second principal component serves as the key to defining the types of Northern private gardens and distinguishing them from the other two types, and the third principal component indicates that Lingnan private gardens can be categorized into two distinct types as well.
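A rough Python sketch of the principal component cluster analysis workflow described above, run on random stand-in data since the 273 garden variables are not reproduced here; the Ward linkage and the three-group cut are assumptions rather than the paper's exact settings.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in: 9 gardens scored on 273 coded variables.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(9, 273))

    X_std = StandardScaler().fit_transform(X)
    scores = PCA(n_components=4).fit_transform(X_std)   # keep the four leading components

    Z = linkage(scores, method="ward")                  # hierarchical clustering on the PC scores
    groups = fcluster(Z, t=3, criterion="maxclust")     # cut the tree into three garden groups
    print(groups)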
The k-means algorithm is a popular data clustering technique due to its speed and simplicity. However, it is susceptible to issues such as sensitivity to the chosen seeds, and inaccurate clusters due to poor initial seeds, particularly in complex datasets or datasets with non-spherical clusters. In this paper, a Comprehensive K-Means Clustering algorithm is presented, in which multiple trials of k-means are performed on a given dataset. The clustering results from each trial are transformed into a five-dimensional data point, containing the scope values of the x and y coordinates of the clusters along with the number of points within that cluster. A graph is then generated displaying the configuration of these points using Principal Component Analysis (PCA), from which we can observe and determine the common clustering patterns in the dataset. The robustness and strength of these patterns are then examined by observing the variance of the results of each trial, wherein a different subset of the data, keeping a certain percentage of original data points, is clustered. By aggregating information from multiple trials, we can distinguish clusters that consistently emerge across different runs from those that are more sensitive or unlikely, hence deriving more reliable conclusions about the underlying structure of complex datasets. Our experiments show that our algorithm is able to find the most common associations between different dimensions of data over multiple trials, often more accurately than other algorithms, as well as measure stability of these clusters, an ability that other k-means algorithms lack.
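The sketch below illustrates the trial-aggregation idea on synthetic 2-D data: each cluster from each k-means trial is summarized as a five-dimensional descriptor and the descriptors are projected with PCA. Interpreting the "scope values" as the minimum and maximum of each coordinate is an assumption.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2))
                      for c in [(0.0, 0.0), (4.0, 4.0), (0.0, 5.0)]])

    summaries = []
    for trial in range(20):                              # repeated k-means trials
        km = KMeans(n_clusters=3, n_init=1, random_state=trial).fit(data)
        for k in range(3):
            pts = data[km.labels_ == k]
            # 5-D descriptor: x range, y range, and cluster size.
            summaries.append([pts[:, 0].min(), pts[:, 0].max(),
                              pts[:, 1].min(), pts[:, 1].max(), len(pts)])

    embedded = PCA(n_components=2).fit_transform(np.array(summaries))
    # Recurring clusterings show up as tight groups of descriptor points in this embedding.
    print(embedded[:5].round(2))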
The goal of this study was to optimize the constitutive parameters of foundation soils using a k-means algorithm with clustering analysis. A database was collected from unconfined compression tests, Proctor tests, and grain distribution tests of soils taken from three different types of foundation pits: raft foundations, partial raft foundations, and strip foundations. The k-means algorithm with cluster analysis was applied to determine the most appropriate foundation type given the unconfined compression strengths and other parameters of the different soils.
Constrained by natural conditions, photovoltaic (PV) output is highly stochastic. To accurately assess the output characteristics of distributed PV generation on rail transit infrastructure, this paper proposes a typical-scenario generation method for such systems based on an improved K-means clustering algorithm, and uses it to analyze PV output characteristics. First, PV output data are simulated with the PVsyst software based on the distributed PV installations and meteorological data. Then, to address the arbitrariness of the clustering parameters and initial cluster centers in the basic K-means algorithm, the algorithm is improved by combining a clustering validity index (DBI) with hierarchical clustering, and the improved K-means algorithm is used to generate typical daily PV output scenarios. Finally, the effectiveness and advantages of the proposed method are verified on a distributed PV system of rail transit infrastructure in Central China, and the patterns and characteristics of distributed PV output on rail transit infrastructure are revealed through qualitative and quantitative analysis of the output characteristics of each typical scenario.
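One plausible reading of the described improvement, sketched in Python on random stand-in data rather than PVsyst simulations: hierarchical clustering seeds K-means with deterministic initial centers, and the Davies-Bouldin index (DBI) selects the number of typical scenarios. This is an illustration, not the authors' exact algorithm.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from sklearn.cluster import KMeans
    from sklearn.metrics import davies_bouldin_score

    rng = np.random.default_rng(2)
    pv_days = rng.random(size=(365, 24))   # stand-in for 365 daily PV output profiles (24 hourly values)

    best = None
    for k in range(2, 8):
        # Hierarchical pre-clustering supplies deterministic initial centers instead of random seeds.
        pre_labels = fcluster(linkage(pv_days, method="ward"), t=k, criterion="maxclust")
        init = np.vstack([pv_days[pre_labels == c].mean(axis=0) for c in range(1, k + 1)])
        km = KMeans(n_clusters=k, init=init, n_init=1).fit(pv_days)
        dbi = davies_bouldin_score(pv_days, km.labels_)   # lower DBI means better separated scenarios
        if best is None or dbi < best[0]:
            best = (dbi, k, km.cluster_centers_)

    print("chosen number of typical scenarios:", best[1])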
Cluster analysis is one of the major data analysis methods, widely used in many practical applications and emerging areas of data mining. A good clustering method produces high-quality clusters with high intra-cluster similarity and low inter-cluster similarity. Clustering techniques are applied in different domains to predict future trends from available data and to support real-world uses. This work evaluates the performance of two of the most widely used partition-based clustering algorithms, k-Means and k-Medoids. Both algorithms are implemented and their performance is analyzed in terms of clustering result quality, execution time, and other components. Telecommunication data is the source for this analysis: connection-oriented broadband data is given as input, and the distance between server locations and their connections is used for clustering. Execution time for each algorithm is measured and the results are compared with one another. The results of the comparison study are satisfactory for the chosen application.
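A hedged sketch of such a comparison: scikit-learn's KMeans is timed against a minimal Voronoi-iteration k-medoids (not the full PAM algorithm the paper may have used), on random stand-in data rather than the telecommunication dataset.

    import time
    import numpy as np
    from sklearn.cluster import KMeans

    def simple_kmedoids(X, k, n_iter=20, seed=0):
        # Minimal Voronoi-iteration k-medoids (not the full PAM algorithm).
        rng = np.random.default_rng(seed)
        medoids = X[rng.choice(len(X), size=k, replace=False)]
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_iter):
            d = np.linalg.norm(X[:, None, :] - medoids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for c in range(k):
                pts = X[labels == c]
                if len(pts):
                    within = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2).sum(axis=1)
                    medoids[c] = pts[within.argmin()]
        return labels

    # Stand-in for broadband connection records described by two distance-related features.
    X = np.random.default_rng(3).normal(size=(1000, 2))

    t0 = time.perf_counter()
    KMeans(n_clusters=4, n_init=10).fit(X)
    t_means = time.perf_counter() - t0

    t0 = time.perf_counter()
    simple_kmedoids(X, k=4)
    t_medoids = time.perf_counter() - t0
    print(f"k-means: {t_means:.3f} s   k-medoids: {t_medoids:.3f} s")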
With the advent of the era of big data and the development and construction of smart campuses, campuses are gradually moving towards digitalization, networking, and informatization. The campus card is an important part of smart campus construction, and the massive data it generates can indirectly reflect students’ living conditions at school. How to quickly and accurately extract the information users need from these massive campus card data sets has become an urgent problem. This paper proposes a data mining algorithm based on K-Means clustering and time series analysis. It analyzes the consumption data of a college’s student cards to mine students’ daily consumer behavior habits and to make accurate judgments about specific consumption behaviors. The proposed algorithm provides a practical reference for the construction of smart campuses in universities and has theoretical and application value.
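A rough sketch of the kind of pipeline implied above, using hypothetical card-transaction fields (student_id, timestamp, amount): an hour-of-day spending profile is built per student and K-Means groups the profiles. The field names and synthetic data are assumptions.

    import numpy as np
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical card-transaction table: one row per swipe.
    rng = np.random.default_rng(4)
    n = 5000
    tx = pd.DataFrame({
        "student_id": rng.integers(0, 200, size=n),
        "timestamp": pd.Timestamp("2023-09-01")
        + pd.to_timedelta(rng.integers(0, 30 * 24 * 60, size=n), unit="min"),
        "amount": np.round(rng.gamma(2.0, 6.0, size=n), 2),
    })

    # Hour-of-day spending profile per student (a simple time-series feature).
    tx["hour"] = tx["timestamp"].dt.hour
    profile = tx.pivot_table(index="student_id", columns="hour", values="amount",
                             aggfunc="sum", fill_value=0.0)

    labels = KMeans(n_clusters=4, n_init=10).fit_predict(StandardScaler().fit_transform(profile))
    print(pd.Series(labels).value_counts())   # sizes of the behavioral groups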
For the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, the elbow rule, and other methods were used to build logistic regression, cluster analysis, and hyper-parameter test models, and tools such as SPSS and Python were used to obtain the classification rules of glass products under different fluxes, the sub-classification under different chemical compositions, the hyper-parameter K value test, and a rationality analysis. The research can provide theoretical support for the protection and restoration of ancient glass relics.
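A minimal sketch of two of the listed ingredients on random stand-in composition data: an L1-regularized logistic regression for the flux-based classification rule and the elbow rule (inertia versus K) for the sub-classification step. The feature count and labels are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    X = rng.random(size=(60, 14))            # stand-in for 14 chemical-composition fractions per sample
    y = (X[:, 0] > 0.5).astype(int)          # stand-in for the flux-type label

    # L1-regularized logistic regression keeps only the most discriminative components.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
    print("non-zero coefficients:", int(np.count_nonzero(clf.coef_)))

    # Elbow rule: inspect inertia versus K and pick the bend for the sub-classification step.
    inertias = [KMeans(n_clusters=k, n_init=10).fit(X).inertia_ for k in range(1, 9)]
    print([round(v, 1) for v in inertias])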
The recent pandemic crisis has highlighted the importance of the availability and management of health data to respond quickly and effectively to health emergencies, while respecting the fundamental rights of every individual. In this context, it is essential to find a balance between the protection of privacy and the safeguarding of public health, using tools that guarantee transparency and consent to the processing of data by the population. This work, starting from a pilot investigation conducted in the Polyclinic of Bari as part of the Horizon Europe Seeds project entitled “Multidisciplinary analysis of technological tracing models of contagion: the protection of rights in the management of health data”, has the objective of promoting greater patient awareness regarding the processing of their health data and the protection of privacy. The methodology employed the PHICAT (Personal Health Information Competence Assessment Tool) and, through the administration of a questionnaire, aimed to evaluate patients’ ability to express their consent to the release and processing of health data. The results were analyzed in relation to the four domains into which the process is divided, which allow evaluation of the patients’ ability to express a conscious choice, and also in relation to the socio-demographic and clinical characteristics of the patients themselves. This study can contribute to understanding patients’ ability to give their consent and can improve information regarding the management of health data, increasing confidence in granting the use of such data for research and clinical management.
In this paper, CiteSpace, a bibliometrics software package, was used to collect research papers published on the Web of Science that are relevant to biological models and effluent quality prediction in the activated sludge process of wastewater treatment. Through trend maps, keyword knowledge maps, and co-citation knowledge maps, the authors, institutions, and regions were visualized and identified. Furthermore, the topics and hotspots of water quality prediction in the activated sludge process were determined through literature co-citation-based cluster analysis and citation burst analysis, which not only reflects the historical evolution of the field to a certain extent, but also provides direction and insight into the knowledge structure of water quality prediction and the activated sludge process for future research.
Various types of plasma events emerge in specific parameter ranges and exhibit similar characteristics in diagnostic signals, which can be used to identify these events. A semi-supervised machine learning approach based on the k-means clustering algorithm is utilized to investigate and identify plasma events in the J-TEXT plasma. This method can cluster diverse plasma events with homogeneous features, and these events can then be identified given a few manually labeled examples based on physical understanding. A survey of the clustered events reveals that the k-means algorithm can make plasma events (rotating tearing mode, sawtooth oscillations, and locked mode) gather in a Euclidean space composed of multi-dimensional diagnostic data, such as soft x-ray emission intensity, edge toroidal rotation velocity, and the Mirnov signal amplitude. Based on the cluster analysis results, an approximate analytical model is proposed to rapidly identify plasma events in the J-TEXT plasma. The cluster analysis method is conducive to data labeling of massive diagnostic data.
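The sketch below mirrors the described semi-supervised use of k-means on random stand-in diagnostics: unlabeled time slices are clustered, and a few manually labeled examples name the clusters they fall into. The feature set, indices, and labels are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(6)
    # Stand-in diagnostic features per time slice: soft x-ray intensity, edge toroidal
    # rotation velocity, Mirnov amplitude, and two further channels.
    features = rng.normal(size=(5000, 5))

    X = StandardScaler().fit_transform(features)
    km = KMeans(n_clusters=6, n_init=10).fit(X)

    # A few manually labeled time slices (indices and names are purely illustrative).
    manual_labels = {12: "tearing mode", 480: "sawtooth", 2301: "locked mode"}

    # Each labeled example names the whole cluster it falls into; the rest stay unlabeled.
    cluster_name = {km.labels_[i]: name for i, name in manual_labels.items()}
    identified = [cluster_name.get(c, "unlabeled") for c in km.labels_]
    print(identified[:10])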
The analysis of microstates in EEG signals is a crucial technique for understanding the spatiotemporal dynamics of brain electrical activity. Traditional methods such as Atomic Agglomerative Hierarchical Clustering (AAHC), K-means clustering, Principal Component Analysis (PCA), and Independent Component Analysis (ICA) are limited by a fixed number of microstate maps and insufficient capability in cross-task feature extraction. Tackling these limitations, this study introduces a Global Map Dissimilarity (GMD)-driven density canopy K-means clustering algorithm. This approach autonomously determines the optimal number of EEG microstate topographies and employs Gaussian kernel density estimation alongside the GMD index for dynamic modeling of EEG data. Utilizing this algorithm, the study analyzes the Motor Imagery (MI) dataset from the GigaScience database, GigaDB. The findings reveal six distinct microstates during actual right-hand movement and five microstates across the other task conditions, with microstate C showing superior performance in all task states. During imagined movement, microstate A was significantly enhanced. Comparison with existing algorithms indicates a significant improvement in clustering performance by the refined method, with an average Calinski-Harabasz Index (CHI) of 35517.29 and an average Davies-Bouldin Index (DBI) of 2.57. Furthermore, an information-theoretical analysis of the microstate sequences suggests that imagined movement exhibits higher complexity and disorder than actual movement. By utilizing the extracted microstate sequence parameters as features, the improved algorithm achieved a classification accuracy of 98.41% in EEG signal categorization for motor imagery, and 78.183% accuracy in a four-class motor imagery task on the BCI-IV-2a dataset. These results demonstrate the potential of the algorithm in microstate analysis, offering a more effective tool for a deeper understanding of the spatiotemporal features of EEG signals.
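As a hedged illustration, the snippet below computes the GMD between two topographies and evaluates a clustering with the CHI and DBI metrics mentioned above; plain K-means on random stand-in EEG maps is used in place of the paper's density canopy initialization.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import calinski_harabasz_score, davies_bouldin_score

    def gmd(u, v):
        # Global Map Dissimilarity between two topographies: RMS difference of GFP-normalized maps.
        u = (u - u.mean()) / u.std()
        v = (v - v.mean()) / v.std()
        return float(np.sqrt(np.mean((u - v) ** 2)))

    rng = np.random.default_rng(7)
    maps = rng.normal(size=(3000, 64))      # stand-in: 3000 EEG samples x 64 channels

    # Plain K-means stands in here for the paper's density-canopy initialization.
    km = KMeans(n_clusters=5, n_init=10).fit(maps)
    print("CHI:", round(calinski_harabasz_score(maps, km.labels_), 2))
    print("DBI:", round(davies_bouldin_score(maps, km.labels_), 2))
    print("GMD to assigned template:", round(gmd(maps[0], km.cluster_centers_[km.labels_[0]]), 3))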
As critical conduits for the dissemination of online public opinion, social media platforms offer a timely and effective means for managing emergencies during major disasters, such as earthquakes. This study focuses on the analysis of online public opinion following the Maduo M7.4 earthquake in Qinghai Province and the Yangbi M6.4 earthquake in Yunnan Province. By collecting, cleaning, and organizing post-earthquake Sina Weibo (hereafter Weibo) data, we employed the Latent Dirichlet Allocation (LDA) model to extract information pertinent to public opinion on these earthquakes. This analysis included a comparison of the nature and temporal evolution of online public opinion related to both events. An emotion analysis, utilizing an emotion dictionary, categorized the emotional content of post-earthquake Weibo posts, facilitating a comparative study of the characteristics and temporal trends of online public emotions following the earthquakes. The findings were visualized using Geographic Information System (GIS) techniques. The analysis revealed certain commonalities in online public opinion following both earthquakes. Notably, the peak of online engagement occurred within the first 24 hours post-earthquake, with a rapid decline observed between 24 and 48 hours thereafter. The variation in popularity of online public opinion was linked to aftershock occurrences. Adjusted for population, online engagement in the areas surrounding the earthquake sites and in Sichuan Province was significantly high. Initially dominated by feelings of “fear” and “surprise”, public sentiment shifted towards a more positive outlook with the onset of rescue operations. However, distinctions in the online public response to each earthquake were also noted. Following the Yangbi earthquake, Yunnan Province reported the highest number of Weibo posts nationwide; in contrast, Qinghai Province ranked third after the Maduo earthquake, attributable to its smaller population and extensive damage to communication infrastructure. This research offers a methodological approach for the analysis of online public opinion related to earthquakes, providing insights for the enhancement of post-disaster emergency management and public mental health support.
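A minimal sketch of the LDA step on a toy English stand-in corpus (the actual Weibo data and Chinese preprocessing are not reproduced here); the number of topics and the example posts are assumptions.

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    # A handful of toy posts standing in for the cleaned post-earthquake Weibo corpus.
    posts = [
        "strong shaking felt tonight everyone stay safe",
        "rescue teams arriving roads damaged near the epicentre",
        "another aftershock so scared cannot sleep",
        "donations and relief supplies organised for the affected county",
        "power and communication restored in several townships",
    ]

    counts = CountVectorizer(stop_words="english").fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    print(lda.transform(counts).round(2))   # per-post topic mixture used for the opinion analysis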
In the past 30 years, Chinese enterprises have been a frequent topic of public discussion regarding their economic and social status, ownership structure, business mechanisms, and management level. Solving the employment problem is an important prerequisite for people to live and work in peace, as well as a foundation for building a harmonious society. The employment situation of private enterprises has always drawn outside attention, and private and individual businesses occupy an important position in China’s employment landscape that cannot be ignored. With the establishment of the market economy system, individual and private enterprises have become important components of the socialist economy, making significant contributions to economic development and social progress. The rapid development of China’s economy reflects, on the one hand, the strengths of China’s socialist market economic system and, on the other hand, the role of the tertiary industry and private enterprises in driving the national economy. Since the 1990s, China’s private enterprises have become a new growth point for local and even the national economy, and are one of the important channels for arranging employment and achieving social stability. This paper studies the employment of private and individual enterprises from a statistical perspective, extracts relevant data from the China Statistical Yearbook, processes the data using statistical methods, draws conclusions, and puts forward constructive suggestions.
Internet services and web-based applications play pivotal roles in various sensitive domains, encompassing e-commerce, e-learning, e-healthcare, and e-payment. However, safeguarding these services poses a significant challenge, as the need for robust security measures becomes increasingly imperative. This paper presents an innovative method based on differential analysis to detect abrupt changes in network traffic characteristics. The core concept revolves around identifying abrupt alterations in certain characteristics of the analyzed traffic, such as input/output volume, the number of TCP connections, or DNS queries. Initially, the traffic is segmented into distinct sequences of slices, followed by quantifying specific characteristics for each slice. Subsequently, the distance between successive values of these measured characteristics is computed and clustered to detect sudden changes. To accomplish its objectives, the approach combines several techniques, including propositional logic, distance metrics (e.g., Kullback-Leibler divergence), and clustering algorithms (e.g., K-means). When applied to two distinct datasets, the proposed approach demonstrates exceptional performance, achieving detection rates of up to 100%.
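A hedged end-to-end sketch of the described pipeline on synthetic traffic counts: slice the traffic, turn each slice's feature mix into a distribution, compute the Kullback-Leibler divergence between successive slices, and cluster the divergences with K-means so that the high-divergence cluster flags abrupt changes. The slice length, features, and injected anomaly are assumptions.

    import numpy as np
    from scipy.stats import entropy
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)
    # Stand-in traffic: per-second counts of (bytes_in, bytes_out, tcp_connections, dns_queries).
    traffic = rng.poisson(lam=[500, 400, 30, 10], size=(3600, 4)).astype(float)
    traffic[1800:1900, 3] *= 20                     # inject a burst of DNS queries

    # 1) Segment into 60 s slices and normalize each slice's feature mix into a distribution.
    slices = traffic.reshape(60, 60, 4).sum(axis=1)
    dists = slices / slices.sum(axis=1, keepdims=True)

    # 2) Kullback-Leibler divergence between successive slices.
    kl = np.array([entropy(dists[i + 1], dists[i]) for i in range(len(dists) - 1)])

    # 3) Cluster the divergences; the cluster containing the largest value marks abrupt changes.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(kl.reshape(-1, 1))
    change_cluster = labels[kl.argmax()]
    print("slices flagged as abrupt change:", np.where(labels == change_cluster)[0] + 1)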