Abstract: Handoff in IEEE 802.11 requires repeated authentication and key exchange procedures, which makes the provision of seamless services in wireless LANs more difficult. To reduce this overhead, proactive caching schemes have been proposed. However, they require too many control packets to deliver the security context information to neighboring access points. Our contribution is twofold: one is a significant decrease in the number of control packets needed for proactive caching, and the other is a superior cache replacement algorithm.
Funding: Supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (MEST) of Korea under Grant No. 2011-0016648.
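For illustration only, here is a minimal Python sketch, not the authors' scheme, of the general idea behind proactive caching of security context at a neighboring access point: the context is pushed before an anticipated handoff, and a replacement policy bounds the cache. LRU is used purely as a placeholder because the abstract does not disclose the proposed replacement algorithm; all class and field names are hypothetical.

```python
# Illustrative sketch only: proactive caching of security context
# (e.g., a pairwise master key) at a neighbor AP, with LRU standing in
# for the paper's (unspecified) cache replacement algorithm.
from collections import OrderedDict

class SecurityContextCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()          # station MAC -> cached context

    def proactive_insert(self, station_mac, context):
        """Store context pushed ahead of an anticipated handoff."""
        if station_mac in self.entries:
            self.entries.move_to_end(station_mac)
        self.entries[station_mac] = context
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry

    def lookup(self, station_mac):
        """A cache hit at handoff time avoids a full re-authentication."""
        context = self.entries.get(station_mac)
        if context is not None:
            self.entries.move_to_end(station_mac)
        return context

# Context is pushed before the handoff and is found on arrival.
ap_cache = SecurityContextCache(capacity=2)
ap_cache.proactive_insert("00:11:22:33:44:55", {"pmk": "example-key"})
assert ap_cache.lookup("00:11:22:33:44:55") is not None
```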
Abstract: A biclustering algorithm extends conventional clustering techniques to extract all of the meaningful subgroups of genes and conditions in the expression matrix of a microarray dataset. However, such algorithms are very sensitive to input parameters and show poor scalability. This paper proposes a scalable unsupervised biclustering framework, SUBic, to find high-quality constant-row biclusters in an expression matrix effectively. A one-dimensional clustering algorithm is proposed to partition the attributes, that is, the columns of an expression matrix, into disjoint groups based on the similarity of expression values. These groups form a set of short transactions and are used to discover a set of frequent itemsets, each of which corresponds to a bicluster. However, a bicluster may include an attribute whose expression value is not similar enough to the others, so a bicluster refinement step is used to enhance the quality of a bicluster by removing such attributes based on its distribution of expression values. The performance of the proposed method is comparatively analyzed through a series of experiments on synthetic and real datasets.
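As a rough illustration of the pipeline described above (and not SUBic itself), the following Python sketch bins each row's expression values as a crude stand-in for one-dimensional clustering, treats the resulting column groups as transactions, and reports column sets that recur in enough rows as constant-row biclusters. It matches whole groups rather than mining all frequent itemsets, and it omits the refinement step; thresholds and names are illustrative assumptions.

```python
# Illustrative sketch only: bin each row's values, turn the resulting
# column groups into transactions, and treat column sets that recur in
# many rows as constant-row biclusters.
from collections import defaultdict

def row_to_groups(row, bin_width=1.0):
    """Partition column indices of one row by binned expression value."""
    groups = defaultdict(set)
    for col, value in enumerate(row):
        groups[round(value / bin_width)].add(col)
    return [frozenset(cols) for cols in groups.values() if len(cols) >= 2]

def find_constant_row_biclusters(matrix, min_support=2, bin_width=1.0):
    """Map each recurring column group to the rows that support it."""
    support = defaultdict(list)
    for row_idx, row in enumerate(matrix):
        for group in row_to_groups(row, bin_width):
            support[group].append(row_idx)
    return {group: rows for group, rows in support.items()
            if len(rows) >= min_support}

expression = [
    [1.0, 1.1, 5.0, 0.9],
    [1.2, 0.8, 7.0, 1.0],
    [4.0, 4.1, 4.2, 9.0],
]
print(find_constant_row_biclusters(expression))
# Columns {0, 1, 3} carry similar values in rows 0 and 1, forming one bicluster.
```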
Abstract: This research proposes a phase-change memory (PCM) based main memory system with an effective combination of a superblock-based adaptive buffering structure and its associated set-divisible last-level cache (LLC). To achieve performance close to that of dynamic random-access memory (DRAM) based main memory, the superblock-based adaptive buffer (SABU) comprises dual DRAM buffers, i.e., an aggressive superblock-based pre-fetching buffer (SBPB) and an adaptive sub-block reusing buffer (SBRB), together with a set-divisible LLC based on a cache space optimization scheme. According to our experiments, the longer PCM access latency can typically be hidden using the proposed SABU, which reduces the number of writes to the PCM main memory by 26.44%. The SABU approach can reduce PCM access latency to as little as 0.43 times that of conventional DRAM main memory. Meanwhile, the average memory energy consumption is reduced by 19.7%.
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology of Korea under Grant No. 2012R1A1A3013084.
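As an illustration of the buffering principle, and not of SABU itself, the sketch below places a small DRAM write-back buffer in front of PCM so that repeated writes to hot blocks are absorbed in DRAM and reach PCM only on eviction or flush. The capacity, the LRU ordering, and the class name are assumptions; the paper's SBPB/SBRB structures and set-divisible LLC are considerably more elaborate.

```python
# Illustrative sketch only: a small DRAM write-back buffer in front of
# PCM.  Repeated writes to the same block are absorbed in DRAM and are
# written to PCM only when the block is evicted or flushed.
from collections import OrderedDict

class BufferedPCM:
    def __init__(self, buffer_blocks=4):
        self.buffer = OrderedDict()     # block id -> dirty data (DRAM side)
        self.buffer_blocks = buffer_blocks
        self.pcm_writes = 0             # slow, wear-limited PCM writes

    def write(self, block, data):
        if block in self.buffer:
            self.buffer.move_to_end(block)
        self.buffer[block] = data       # absorbed by the DRAM buffer
        if len(self.buffer) > self.buffer_blocks:
            self.buffer.popitem(last=False)
            self.pcm_writes += 1        # write-back of the evicted block

    def flush(self):
        self.pcm_writes += len(self.buffer)
        self.buffer.clear()

mem = BufferedPCM(buffer_blocks=2)
for block in [0, 1, 0, 0, 1, 0]:        # hot blocks rewritten repeatedly
    mem.write(block, "payload")
mem.flush()
print(mem.pcm_writes)                   # 2 PCM writes instead of 6
```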
Abstract: Over the past several decades, biologists have conducted numerous studies examining both general and specific functions of proteins. Generally, if two proteins are similar in either structure or amino acid sequence, a common biological function is expected. Protein function is determined primarily by structure rather than by the sequence of amino acids, so protein structure alignment algorithms are essential tools for this research. The quality of such an algorithm depends on the quality of the similarity measure it uses, where the similarity measure is the objective function used to determine the best alignment. Because each measure has its own strengths and weaknesses, none of the existing similarity measures has become a gold standard, and they require excessive filtering to find a single alignment. In this paper, we introduce a new strategy that finds not a single alignment, but multiple alignments of different lengths. This method has the obvious benefit of high-quality alignments; however, it leads to a new problem: its running time is considerably longer than that of methods that find only a single alignment. To address this problem, we propose algorithms that locate a common region (CORE) of multiple alignment candidates and then extend the CORE into multiple alignments. Because the CORE can only be defined from a final alignment, we introduce CORE*, which is similar to CORE, and propose an algorithm to identify it. By adopting CORE* and dynamic programming, our proposed method produces multiple alignments of various lengths with higher accuracy than previous methods. In our experiments, the alignments identified by our algorithm are longer than those obtained by TM-align by 17% and 15.48%, on average, when the comparison is conducted at the super-family and fold levels, respectively.
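To make the notion of a common region concrete, here is a minimal Python sketch under the simplifying assumption that an alignment is just a set of aligned residue-index pairs: the pairs shared by every candidate alignment form the common region on which extension can later build. The dynamic-programming extension step and the CORE* approximation are not shown, and all names are illustrative.

```python
# Illustrative sketch only: the residue pairs shared by all candidate
# alignments approximate the common region (CORE) of those candidates.
def common_region(candidate_alignments):
    """Return residue pairs shared by every candidate alignment."""
    core = set(candidate_alignments[0])
    for alignment in candidate_alignments[1:]:
        core &= set(alignment)
    return sorted(core)

# Three candidate alignments of protein A onto protein B,
# written as (residue_in_A, residue_in_B) pairs.
candidates = [
    [(1, 1), (2, 2), (3, 3), (4, 5)],
    [(1, 1), (2, 2), (3, 3), (5, 6)],
    [(1, 1), (2, 2), (3, 4)],
]
print(common_region(candidates))   # [(1, 1), (2, 2)] -- the shared core
```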
Abstract: It is our great pleasure to edit this special section of the Journal of Computer Science and Technology (JCST). The database field has experienced rapid growth with the ever-increasing amount of data. Therefore, novel technologies covering emerging databases, such as network or graph analysis, spatial or temporal data analysis, search, recommendation, and data mining, are required. The goal of this section is to present state-of-the-art research issues, challenges, new technologies, and solutions for emerging databases. The section publishes seven interesting articles related to query processing, trajectory data reduction, botnet evolution, recommendation systems, biclustering, and protein structure alignment. The articles are summarized as follows.