China's civil service, dubbed the "iron rice bowl," is no longer viewed as a dream job by job hunters. During this year's recruitment season from February to mid-March, more than 10,000 civil servants switched to the private sector, a 34-percent increase over last year, according to a report on the flow of professionals between different sectors. The report by Zhaopin.com, a major employment website in China, says following the Spring Festival in February,
It is of great significance to improve the efficiency of railway production and operation by realizing fault knowledge association through efficient data mining algorithms. However, high-utility quantitative frequent pattern mining algorithms in the field of data mining still suffer from poor time and memory performance and do not scale up easily. To meet these needs, we propose a related-degree-based frequent pattern mining algorithm, named Related High Utility Quantitative Itemset Mining (RHUQI-Miner), to enable effective mining of railway fault data. The algorithm constructs an item-related-degree structure over the fault data and gives a pruning optimization strategy to find frequent patterns with higher related degrees, reducing redundant and invalid frequent patterns. Subsequently, it uses a fixed-pattern-length strategy to modify the utility information of items during mining, so that the algorithm can control the length of the output frequent patterns according to the actual data and further improve its performance and practicability. Experimental results on a real fault dataset show that RHUQI-Miner effectively reduces time and memory consumption in the mining process, thus providing data support for differentiated and precise maintenance strategies.
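The abstract does not give the paper's exact formulas, so the sketch below is only a rough, hedged illustration of the two ideas it names: a related-degree pruning rule (here assumed to be a conditional co-occurrence ratio) and a fixed cap on pattern length. Item names, thresholds, and the level-wise join scheme are all illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def related_degree(db, a, b):
    """Fraction of transactions containing item a that also contain item b."""
    with_a = [t for t in db if a in t]
    return sum(1 for t in with_a if b in t) / len(with_a) if with_a else 0.0

def mine(db, min_support, min_related, max_len):
    counts = Counter(i for t in db for i in t)
    level = [frozenset([i]) for i, c in counts.items() if c >= min_support]
    result = list(level)
    while level:
        nxt = []
        for p, q in combinations(level, 2):
            cand = p | q
            # fixed-pattern-length cap; only join k-patterns into (k+1)-patterns
            if len(cand) != len(p) + 1 or len(cand) > max_len or cand in nxt:
                continue
            # assumed pruning rule: drop candidates with weakly related item pairs
            if min(related_degree(db, a, b)
                   for a in cand for b in cand if a != b) < min_related:
                continue
            if sum(1 for t in db if cand <= t) >= min_support:
                nxt.append(cand)
        result += nxt
        level = nxt
    return result

db = [frozenset(t) for t in ({"sensor", "relay"},
                             {"sensor", "relay", "switch"},
                             {"relay", "switch"})]
print(mine(db, min_support=2, min_related=0.5, max_len=2))
```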
A new algorithm based on an FC-tree (frequent closed pattern tree) and max-FCIA (maximal frequent closed itemsets algorithm) is presented for mining frequent closed itemsets while reducing memory and time consumption. The algorithm maps the transaction database using a hash table, obtains the support of all frequent itemsets by operating on the hash table, and forms a lexicographic subset tree containing the frequent itemsets. Efficient pruning methods are used to derive, from the lexicographic subset tree, the FC-tree containing all the minimum frequent closed itemsets. Finally, frequent closed itemsets are generated from the minimum frequent closed itemsets. Experimental results show that mapping the transaction database reduces time consumption and improves the efficiency of the program. Furthermore, the effective pruning strategy restrains the number of candidates, which saves space. The results show that the algorithm is effective.
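For intuition about what "frequent closed itemsets" means, here is a brute-force sketch: supports are accumulated in a hash table (a plain dict here), and an itemset is kept only if it is frequent and no proper superset has the same support. It ignores the FC-tree and pruning machinery entirely and enumerates all subsets, so it is illustrative only.

```python
from itertools import combinations

def frequent_closed(db, min_sup):
    # hash table from itemset -> support, echoing the abstract's mapping step
    support = {}
    for t in db:
        for r in range(1, len(t) + 1):
            for s in combinations(sorted(t), r):
                support[s] = support.get(s, 0) + 1
    freq = {s: c for s, c in support.items() if c >= min_sup}
    # closed: no proper superset has the same support
    return {s: c for s, c in freq.items()
            if not any(set(s) < set(u) and c == d for u, d in freq.items())}

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}]
print(frequent_closed(db, min_sup=2))   # {('a',): 3, ('a','b'): 2, ('a','c'): 2}
```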
Numerous models have been proposed to reduce the classification error of Naive Bayes by weakening its attribute independence assumption, and some have demonstrated remarkable error performance. Considering that ensemble learning is an effective method of reducing a classifier's classification error, this paper proposes a double-layer Bayesian classifier ensembles (DLBCE) algorithm based on frequent itemsets. DLBCE constructs a double-layer Bayesian classifier (DLBC) for each frequent itemset contained in the new instance and finally ensembles all the classifiers, assigning each classifier a weight according to conditional mutual information. Experimental results show that the proposed algorithm outperforms other outstanding algorithms.
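The final weighting step can be pictured as weighted probability averaging. The sketch below assumes each per-itemset classifier exposes a class-probability function and that the conditional-mutual-information scores are supplied as weights; the names and the combination rule are illustrative, not the paper's exact construction.

```python
import numpy as np

def weighted_ensemble(prob_fns, weights, x):
    """Combine per-itemset classifiers by weighted probability averaging.
    `weights` stand in for the conditional-mutual-information scores."""
    probs = np.array([f(x) for f in prob_fns])   # each f returns P(class | x)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * probs).sum(axis=0) / w.sum()

# toy demo with two stub classifiers over classes [neg, pos]
clf_a = lambda x: np.array([0.7, 0.3])
clf_b = lambda x: np.array([0.4, 0.6])
print(weighted_ensemble([clf_a, clf_b], [2.0, 1.0], x=None))   # -> [0.6, 0.4]
```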
With the development of information technology, the amount of power grid topology data has gradually increased, so accurate querying of this data has become particularly important. Researchers have chosen different indexing methods in the filtering stage to obtain better query results, because there is currently no uniform, efficient indexing mechanism that achieves good results. In the traditional algorithm, the hash table used for index storage is prone to collisions, which lowers index construction efficiency. To enable fast index entry, a serialized storage optimization based on multiple hash tables is proposed on top of frequent subgraph indexes. The method mainly uses an exploration sequence to distribute keywords evenly; it avoids conflicts during storage and enables quick searches of the index. The proposed algorithm adopts a "filter-verify" mechanism: in the filtering stage, the index is first built offline, and the frequent subgraphs are then found using the "contains logic" rule to obtain the candidate set. Experimental results show that this method reduces the time and size of candidate set generation and improves query efficiency.
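A minimal sketch of the multiple-hash-table idea, assuming an exploration sequence that probes a fixed number of tables, each under an independent hash function; the table count, hash choice, and the posting-list payload are illustrative assumptions rather than the paper's design.

```python
import hashlib

class MultiHashIndex:
    def __init__(self, n_tables=4, size=101):
        self.tables = [dict() for _ in range(n_tables)]
        self.size = size

    def _slot(self, key, i):
        # independent hash per table, derived by salting with the table index
        h = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8).digest()
        return int.from_bytes(h, "big") % self.size

    def put(self, key, value):
        # exploration sequence: try each table in turn, stop at first free slot
        for i, table in enumerate(self.tables):
            slot = self._slot(key, i)
            if slot not in table or table[slot][0] == key:
                table[slot] = (key, value)
                return True
        return False   # all tables collided; a real index would resize/rehash

    def get(self, key):
        for i, table in enumerate(self.tables):
            entry = table.get(self._slot(key, i))
            if entry and entry[0] == key:
                return entry[1]
        return None

idx = MultiHashIndex()
idx.put("subgraph:abc", [1, 5, 9])   # hypothetical frequent-subgraph posting list
print(idx.get("subgraph:abc"))
```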
Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied extensively in data mining research. Most previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is very costly. Han J. proposed a novel algorithm, FP-growth, that generates frequent patterns without a candidate set. Based on an analysis of FP-growth, this paper proposes the concept of an equivalent FP-tree and an improved algorithm, denoted FP-growth*, which is much faster and easy to implement. FP-growth* adopts a modified structure of the FP-tree and header table, and only generates a header table in each recursive operation, projecting the tree onto the original FP-tree. The two algorithms produce the same frequent pattern set on the same transaction database, but performance studies show that the improved algorithm, FP-growth*, is at least twice as fast as FP-growth.
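To make the structures concrete, here is plain FP-tree construction with a header table, i.e. standard FP-growth scaffolding; it is not the FP-growth* variant, whose equivalent-tree projection the abstract only outlines.

```python
from collections import Counter

class Node:
    def __init__(self, item, parent):
        self.item, self.parent, self.count, self.children = item, parent, 1, {}

def build_fp_tree(db, min_sup):
    counts = Counter(i for t in db for i in t)
    order = {i: c for i, c in counts.items() if c >= min_sup}
    root, header = Node(None, None), {}   # header table: item -> list of nodes
    for t in db:
        node = root
        # insert items in descending global frequency, as FP-growth requires
        for item in sorted((i for i in t if i in order),
                           key=lambda i: (-order[i], i)):
            child = node.children.get(item)
            if child:
                child.count += 1
            else:
                child = Node(item, node)
                node.children[item] = child
                header.setdefault(item, []).append(child)
            node = child
    return root, header

db = [{"a", "b"}, {"b", "c"}, {"a", "b", "c"}, {"b"}]
root, header = build_fp_tree(db, min_sup=2)
# total support of each item, summed over its header-table chain
print({item: sum(n.count for n in nodes) for item, nodes in header.items()})
```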
We propose an efficient hybrid algorithm, WDHP, for mining frequent access patterns. WDHP adopts the techniques of DHP to optimize its performance, namely using a hash table to filter the candidate set and trimming the database. Whenever the database is trimmed to a size below a specified threshold, the algorithm loads it into main memory by constructing a tree and finds frequent patterns on the tree. Experiments show that WDHP outperforms the DHP algorithm and the main-memory-based algorithm WAP in execution efficiency.
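A sketch of the DHP-style hash filtering that WDHP reuses: counts of hashed 2-itemsets over-approximate pair supports, so any pair landing in a bucket lighter than the support threshold can be pruned before exact counting. The bucket count and toy data are assumptions.

```python
from itertools import combinations

def dhp_surviving_pairs(db, min_sup, n_buckets=13):
    bucket_of = lambda pair: hash(pair) % n_buckets
    buckets = [0] * n_buckets
    for t in db:
        for pair in combinations(sorted(t), 2):
            buckets[bucket_of(pair)] += 1
    # a pair can only be frequent if its bucket count reaches min_sup
    return {pair for t in db for pair in combinations(sorted(t), 2)
            if buckets[bucket_of(pair)] >= min_sup}

db = [{"home", "news"}, {"home", "news", "sport"}, {"home", "sport"}]
print(dhp_surviving_pairs(db, min_sup=2))
```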
Shake table testing was performed to investigate the dynamic stability of a mid-dip bedding rock slope under frequent earthquakes. Numerical modelling was then established to further study the slope's dynamic stability under microseisms alone, considering the influence of five factors: seismic amplitude, slope height, slope angle, strata inclination, and strata thickness. The experimental results show that the natural frequency of the slope decreases and the damping ratio increases as the number of earthquake loadings increases. The dynamic strength reduction method is adopted for stability evaluation of the bedding rock slope in the numerical simulation; slope stability decreases with increasing seismic amplitude, increasing slope height, decreasing strata thickness, and increasing slope angle. The failure mode of the mid-dip bedding rock slope in the shaking table test is integral slipping along the bedding surface, with dipping tensile cracks at the slope rear edge passing through the bedding surfaces. In the numerical simulation, the long-term stability of a mid-dip bedding slope is worst under frequent microseisms and the slope is at risk of integral sliding instability, while the slope rock mass is more broken than in the shaking table test. The research results are of practical significance for better understanding the formation mechanism of reservoir landslides and preventing future landslide disasters.
On-site stormwater detention (OSD) is a conventional component of urban drainage systems, designed with the intention of mitigating the increase in peak discharge of stormwater runoff that inevitably results from urbanization. In Australia, singular temporal patterns for design storms have governed the inputs of hydrograph generation, and in turn the design process of OSD, for the last three decades. This paper raises the concern that many existing OSD systems designed using the singular temporal pattern for design storms may not be achieving their stated objectives when assessed against a variety of alternative temporal patterns. The performance of twenty real OSD systems was investigated using two methods: (1) ensembles of design temporal patterns prescribed in the latest version of Australian Rainfall and Runoff, and (2) real recorded rainfall data taken from pluviograph stations modeled with continuous simulation. It is shown conclusively that the use of singular temporal patterns is ineffective in providing assurance that an OSD will mitigate the increase in peak discharge for all possible storm events. Ensemble analysis is shown to provide improved results; however, it also falls short of providing any guarantee in the face of naturally occurring rainfall.
Because data warehouses change frequently, incremental data can render previously mined knowledge invalid. To maintain the discovered knowledge and patterns dynamically, this study presents IPARUC, a novel algorithm for updating global frequent patterns. First, a rapid clustering method divides the database into n parts, such that the data within each part are similar. Then, the nodes in the tree are adjusted dynamically during insertion by "pruning and laying back" to maintain frequency-descending order, so that nodes can be shared and the structure approaches the optimum. Finally, local frequent itemsets mined from each local dataset are merged into global frequent itemsets. The results of the experimental study are very encouraging: IPARUC is more effective and efficient than the two contrasting methods. Furthermore, there is significant application potential for a prototype Web log analyzer in web usage mining, which can help discover useful knowledge effectively and even support managerial decision-making.
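The final merge step might look like the sketch below: local supports are summed and itemsets below the global threshold are dropped. This is a simplification; partition-based mining normally needs a verification scan as well, since an itemset can be undercounted in partitions where it was locally infrequent.

```python
from collections import Counter

def merge_local_results(local_results, global_min_sup):
    """Merge {itemset: local support} dicts from each partition, keeping only
    itemsets whose summed support clears the global threshold."""
    total = Counter()
    for part in local_results:
        for itemset, sup in part.items():
            total[itemset] += sup
    return {s: c for s, c in total.items() if c >= global_min_sup}

locals_ = [{frozenset({"a"}): 3, frozenset({"a", "b"}): 2},
           {frozenset({"a"}): 2, frozenset({"a", "b"}): 1}]
print(merge_local_results(locals_, global_min_sup=4))   # {frozenset({'a'}): 5}
```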
Current technology for frequent itemset mining mostly applies to data stored in a single transaction database. This paper presents MultiClose, a novel algorithm for frequent itemset mining in data warehouses. MultiClose computes the results in the individual dimension tables and then merges the results with a very efficient approach. The closed-itemsets technique is used to improve the performance of the algorithm. The authors propose an efficient implementation for star schemas in which their algorithm outperforms state-of-the-art single-table algorithms.
A novel binary particle swarm optimization for frequent itemset mining from high-dimensional datasets (BPSO-HD) is proposed, incorporating two improvements. First, dimensionality reduction of the initial particles is designed to ensure reasonable initial fitness; then, dynamic dimensionality cutting of the dataset is built to shrink the search space. On four high-dimensional datasets, BPSO-HD was compared with Apriori to test its reliability, and with ordinary BPSO and quantum swarm evolutionary (QSE) algorithms to demonstrate its advantages. The experiments show that the results given by BPSO-HD are reliable and better than those generated by BPSO and QSE.
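For readers unfamiliar with binary PSO, one velocity-and-position update under the standard Kennedy-Eberhart sigmoid rule is sketched below. The fitness function (e.g., support of the itemset encoded by a particle's bits) and all constants are generic assumptions; the paper's two improvements are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One binary-PSO update: each bit turns on with probability sigmoid(velocity)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
    return x, v

x = rng.integers(0, 2, size=8)   # particle = candidate itemset as a bit vector
v = np.zeros(8)
x, v = bpso_step(x, v, pbest=x, gbest=np.ones(8, dtype=int))
print(x, v)
```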
Because mining the complete set of frequent patterns from a dense database can be impractical, an interesting alternative has been proposed recently: instead of mining the complete set of frequent patterns, the new model finds only the maximal frequent patterns, which can generate all frequent patterns. FP-growth is one of the most efficient frequent-pattern mining methods published so far. However, because the FP-tree and conditional FP-trees must be two-way traversable, a great deal of memory is needed during mining. This paper proposes Unid_FP-Max, an efficient algorithm for mining maximal frequent patterns based on a unidirectional FP-tree. Owing to the generation method of the unidirectional FP-tree and conditional unidirectional FP-trees, the algorithm reduces space consumption to the fullest extent. With two further techniques, single path pruning and header table pruning, which cut down the many conditional unidirectional FP-trees generated recursively during mining, Unid_FP-Max further lowers time and space costs.
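The relationship between maximal and ordinary frequent patterns is easy to state in code: a frequent pattern is maximal when no frequent proper superset exists, and the maximal set alone determines every frequent pattern (though not its support). A filtering sketch, independent of the Unid_FP-Max tree machinery:

```python
def maximal_only(frequent):
    """Keep only itemsets with no frequent proper superset."""
    fs = [frozenset(s) for s in frequent]
    return [s for s in fs if not any(s < t for t in fs)]

print(maximal_only([{"a"}, {"b"}, {"a", "b"}, {"c"}]))
# -> [frozenset({'a', 'b'}), frozenset({'c'})]
```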
Frequent itemset mining is an essential problem in data mining and plays a key role in many data mining applications. However, users' personal privacy can be leaked during the mining process. In recent years, applying local differential privacy protection models to mine frequent itemsets has proven a relatively reliable and secure protection method. Local differential privacy means that users first perturb their original data and then send the perturbed data to the aggregator, preventing the aggregator from learning the users' private information. We propose a novel framework that implements frequent itemset mining under local differential privacy and is applicable to users' multi-attribute data. The main techniques are bitmap encoding, which converts a user's original data into a binary string, and a method for choosing the best perturbation algorithm for varying user attributes; the frequent pattern tree (FP-tree) algorithm is then used to mine frequent itemsets. Finally, we incorporate the threshold random response (TRR) algorithm into the framework, compare it with existing algorithms, and demonstrate that the TRR algorithm achieves higher accuracy for mining frequent itemsets.
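The perturbation primitive behind such frameworks is bit-level randomized response over the bitmap encoding; a minimal sketch follows, with an unbiased frequency estimator on the aggregator side. The TRR threshold rule itself is not reproduced here (the abstract does not specify it), so this shows only the generic local-DP mechanism.

```python
import math
import random

def perturb_bit(bit, epsilon):
    """Report the true bit with probability p = e^eps / (e^eps + 1), else flip it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p else 1 - bit

def estimate_count(reports, epsilon):
    """Unbiased count of true 1-bits, inverting E[sum] = k(2p-1) + n(1-p)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    n = len(reports)
    return (sum(reports) - n * (1 - p)) / (2 * p - 1)

# each user holds one bit of a bitmap-encoded itemset indicator
true_bits = [1] * 300 + [0] * 700
reports = [perturb_bit(b, epsilon=1.0) for b in true_bits]
print(estimate_count(reports, epsilon=1.0))   # close to 300 on average
```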
Currently, top-rank-k mining is widely applied to mine frequent patterns whose rank does not exceed k. In existing algorithms, although a level-wise search can fully mine the target patterns, it usually delays the generation of high-rank patterns, slowing the growth of the support threshold and reducing mining efficiency. To address this problem, a greedy-strategy-based top-rank-k frequent pattern hybrid mining algorithm (GTK) is proposed in this paper. In this algorithm, top-rank-k patterns are stored in a static doubly linked list called RSL, and the patterns are divided into short patterns and long patterns. The short patterns generated by a rank-first search always join the two patterns of the highest rank in RSL that have not yet been joined. From the short patterns satisfying specific conditions, the long patterns are extracted through a level-wise search. To reduce redundancy, GTK improves the generation method of the subsume index and designs new candidate pruning strategies. The algorithm also uses reasonable pruning strategies to reduce computation and improve speed. Real and synthetic datasets are adopted in experiments to evaluate the proposed algorithm; the results show GTK's clear advantages in both time and space efficiency.
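The core bookkeeping of top-rank-k mining can be shown compactly: group patterns by support and keep only the k highest distinct supports, which also yields the rising support threshold the abstract refers to. A plain dict stands in for the paper's doubly linked RSL; the join and pruning logic is omitted.

```python
def insert_pattern(rank_table, pattern, support, k):
    """Insert into a top-rank-k table keyed by support; evict the lowest-ranked
    support group once more than k distinct supports are held.
    Returns the current support threshold."""
    rank_table.setdefault(support, []).append(pattern)
    if len(rank_table) > k:
        del rank_table[min(rank_table)]
    return min(rank_table)

table = {}
for pat, sup in [("ab", 5), ("bc", 4), ("cd", 4), ("de", 2), ("ef", 6)]:
    threshold = insert_pattern(table, pat, sup, k=2)
print(table, threshold)   # -> {5: ['ab'], 6: ['ef']} 5
```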
Debris slopes are widely distributed across the Three Gorges Reservoir area in China, and seasonal fluctuations of the water level in the area tend to cause high-frequency microseisms that subsequently induce landslides on such slopes. In this study, a cumulative damage model of debris slopes with varying slope characteristics under frequent microseisms was established, based on an accurate definition of slope damage variables. The cumulative damage behaviour and the mechanisms of slope instability and sliding under frequent microseisms were systematically investigated through a series of shaking table tests and discrete element numerical simulations, and the influences of parameters such as bedrock, dry density, and stone content were discussed. The results show that the instability of a debris slope can be divided into a vibration-compaction stage, a crack generation stage, a crack development stage, and an instability stage. Under frequent microseisms, a debris slope undergoes the last three stages cyclically, which causes the accumulation to slide out in layers under the combined action of tension and shear, destabilising the slope. Two sliding surfaces, as well as parallel tensile surfaces, appear in the final instability of the debris slope. During instability, the development trend of the damage accumulation curve remains similar for debris slopes with different parameters. However, the initial vibration-compaction effect in the bedrock-free model is stronger than in the bedrock model, and the overall cumulative damage degree of the former is lower. The damage degree of a debris slope with high dry density also develops more slowly than that of one with low dry density. The damage development rate of the debris slope does not always decrease as stone content increases: the damage growth rate is lowest at the optimal stone content, and increasing or decreasing the stone content makes slope instability occur earlier. The numerical simulations further reveal that damage in the debris slope develops mainly through crack formation and penetration, with shear failure occurring most frequently.
In this study, three regions of frequent tropical cyclone (TC) occurrence, i.e., the northern South China Sea (region S), the southern Philippine Sea (region P), and the region east of Taiwan Island (region E), are defined using the frequency of TC occurrence at each grid point over a 45-year period (1965–2009), where the frequency of occurrence (FO) of TCs is triple the mean value of the whole western North Pacific. Over region S, there are decreasing trends in the FO of TCs, in the number of TC tracks passing through the region, and in the number of TCs generated in the region. Over region P, the FO and tracks show decadal variation with periods of 10–12 years, while over region E, a significant 4–5 year oscillation appears in both FO and tracks. The differences in TC variation among these three regions are mainly caused by variation of the Western Pacific Subtropical High (WPSH) at different time scales. The westward shift of the WPSH is responsible for the northwesterly anomaly over region S, which inhibits westward TC movement into the region. On the decadal timescale, the WPSH stretches northwestward because of the anomalous anticyclone over the northwestern part of region P, steering more TCs into region P in its high-FO years. The retreat of the WPSH on the interannual time scale is the main reason for the FO oscillation over region E.
With the rapid progress of network and storage technologies, a huge amount of electronic data such as Web pages and XML has become available on the Internet. In this paper, we study the data-mining problem of discovering frequent ordered subtrees in a large collection of XML data, where both the patterns and the data are modeled by labeled ordered trees. We present an efficient algorithm, Ordered Subtree Miner (OSTMiner), based on two-layer neural networks with the Hebb rule, which computes all ordered subtrees appearing in a collection of XML trees with frequency above a user-specified threshold, using a special structure called the EM-tree. In this algorithm, the EM-tree is used as an extended merging tree to supply scheme information for efficient pruning and mining of frequent subtrees. Experimental results show that OSTMiner has good response time and scales well.
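A heavily simplified, hedged sketch of the task: counting how many XML trees contain each labeled parent-child pair, i.e. the 2-node special case of the ordered subtrees mined above. The EM-tree and neural-network layers of OSTMiner are omitted entirely.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def edge_patterns(xml_strings, min_sup):
    counts = Counter()
    for s in xml_strings:
        root = ET.fromstring(s)
        # per-tree occurrence: count each labeled parent-child pair once per tree
        edges = {(p.tag, c.tag) for p in root.iter() for c in p}
        counts.update(edges)
    return {e: c for e, c in counts.items() if c >= min_sup}

docs = ["<a><b/><c><b/></c></a>", "<a><c><b/></c></a>", "<a><d/></a>"]
print(edge_patterns(docs, min_sup=2))   # {('a','c'): 2, ('c','b'): 2}
```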
BACKGROUND: To determine whether elderly frequent attenders are associated with increased 30-day mortality, assess resource utilization by elderly frequent attenders, and identify associated characteristics that contribute to mortality. METHODS: Retrospective observational study of electronic clinical records of all emergency department (ED) visits over a 10-year period to an urban tertiary general hospital in Singapore. Patients aged 65 years and older with 3 or more visits within a calendar year were identified. Outcomes measured include 30-day mortality, admission rate, admission diagnosis, and duration spent at the ED. Chi-square tests were used to assess categorical factors and Student's t-test was used to assess continuous variables for their association with being a frequent attender. Univariate and multivariate logistic regressions were conducted with all significant independent factors against the outcome variable (30-day mortality) to determine each factor's independent odds ratio. RESULTS: 1.381 million attendance records were analyzed. Elderly patients accounted for 25.5% of all attendances, of which 31.3% were frequent attenders. Their 30-day mortality rate increased from 4.0% at the first visit to 8.8% at the third visit, peaking at 10.2% at the sixth visit. Factors associated with mortality include neoplasms, ambulance utilization, male gender, and having attended the ED in the previous year. CONCLUSION: Elderly attenders have a higher 30-day mortality risk than the overall ED population, with mortality risk more marked for frequent attenders. This study illustrates the importance of and need for interventions to address frequent ED visits by the elderly, especially in an aging society.
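As a hedged illustration of the regression step (not the study's data or code), the sketch below fits a regularized multivariate logistic regression on stand-in binary predictors and reads off approximate odds ratios as exponentiated coefficients; sklearn's default L2 penalty shrinks them slightly relative to an unpenalized fit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical columns: male, neoplasm, ambulance_arrival, attended_prev_year
X = np.array([[1, 0, 0, 1], [1, 1, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0],
              [1, 1, 1, 1], [0, 0, 0, 1], [1, 0, 1, 0], [0, 0, 0, 0]])
y = np.array([0, 1, 0, 0, 1, 0, 1, 0])   # 30-day mortality (stand-in labels)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])      # exp(coefficient) ~ factor odds ratio
print(dict(zip(["male", "neoplasm", "ambulance", "prev_year"], odds_ratios)))
```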