This paper investigates the efficacy of different state-of-the-art damage detection methods when applied to real-world structures subjected to ground motion excitations, for which the literature contributions are, at present, still not fully comprehensive. To this purpose, the paper analyses two test structures: (1) a four-story scaled steel frame tested on a shake table under controlled laboratory conditions, and (2) a seven-story reinforced concrete building monitored during the seismic excitations of the 1999 Chi-Chi (Taiwan) earthquake main shock and numerous foreshocks and aftershocks. Several model-based damage approaches and statistics-based damage indexes are reviewed. The different methodologies and indexes are then applied to the two test structures with the final aim of analysing their performance and validity in the case of a laboratory scaled model and a real-world structure subjected to input ground motion.
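As a generic illustration of the kind of statistics-based damage index the abstract refers to (this is not the paper's own formulation), a minimal Python sketch follows: a drop in the dominant natural frequency between a baseline and a post-event acceleration record is a widely used damage-sensitive feature. The zero-crossing frequency estimator, the signal names, and the synthetic sinusoidal records are all illustrative assumptions.

```python
import math

def natural_frequency(signal, fs):
    """Estimate the dominant frequency of a roughly sinusoidal response
    by counting zero crossings (two crossings per cycle)."""
    crossings = sum(
        1 for a, b in zip(signal, signal[1:]) if a < 0.0 <= b or b < 0.0 <= a
    )
    duration = len(signal) / fs
    return crossings / (2.0 * duration)

def frequency_shift_index(baseline, post_event, fs):
    """A simple statistics-based damage index: the relative drop of the
    dominant natural frequency after the seismic event."""
    f0 = natural_frequency(baseline, fs)
    f1 = natural_frequency(post_event, fs)
    return (f0 - f1) / f0

# Synthetic 10 s acceleration records: stiffness loss lowers the frequency.
fs = 200.0
t = [i / fs for i in range(2000)]
baseline = [math.sin(2 * math.pi * 5.0 * s) for s in t]    # 5.0 Hz "healthy"
post_event = [math.sin(2 * math.pi * 4.5 * s) for s in t]  # 4.5 Hz "damaged"
index = frequency_shift_index(baseline, post_event, fs)
```

A positive index flags a stiffness reduction; in practice such features are extracted from modal identification of the measured response rather than from clean sinusoids.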
Constraint-based program analysis is widely used in program validation, program vulnerability analysis, etc. This paper proposes a temporal correlation function to protect programs from such analysis. The temporal correlation function can resist both static and dynamic function summary as well as concolic testing. What's more, the temporal correlation function can produce different outputs even with the same input. This feature can be used to break the premise of function summary and to prevent the concolic testing process from exploring a new branch with a new input. Experimental results show that this method can reduce the efficiency and path coverage of concolic testing, while greatly increasing the difficulty of constraint-based program analysis.
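A toy sketch of the idea described above (not the paper's actual construction): a function whose output is correlated with a hidden, time-evolving state returns different results for the same argument on successive calls. This violates the "same input, same output" premise that function summaries rely on, and replaying a recorded input no longer reproduces the same path for a concolic engine. The class name, the counter state, and the mixing constant are all illustrative assumptions.

```python
import itertools

class TemporalCorrelationFunction:
    """Couples the function's output with an internal call counter, so
    identical inputs yield different outputs across calls."""

    def __init__(self, seed=0):
        self._counter = itertools.count(seed)  # hidden temporal state

    def __call__(self, x):
        t = next(self._counter)
        # The result depends on both the argument and the evolving state,
        # so a summary of the form f(x) -> value is unsound.
        return (x * 2654435761 + t) % 2**32

f = TemporalCorrelationFunction()
same_input_outputs = [f(42) for _ in range(3)]  # three distinct values
```

A real deployment would derive the temporal state from something harder to model than a counter (e.g., wall-clock time or accumulated call history), but the analysis-breaking property is the same.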
Modern storage systems incorporate data compressors to improve their performance and capacity. As a result, data content can significantly influence the result of a storage system benchmark. Because real-world proprietary datasets are too large to be copied onto a test storage system, and most data cannot be shared due to privacy issues, a benchmark needs to generate data synthetically. To ensure that the result is accurate, it is necessary to generate data content based on the characterization of real-world data properties that influence storage system performance during the execution of a benchmark. The existing approach, called SDGen, cannot guarantee that the benchmark result is accurate in storage systems that have built-in word-based compressors. The reason is that SDGen characterizes the properties that influence compression performance only at the byte level, and no properties are characterized at the word level. To address this problem, we present TextGen, a realistic text data content generation method for modern storage system benchmarks. TextGen builds the word corpus by segmenting real-world text datasets, and creates a word-frequency distribution by counting each word in the corpus. To improve data generation performance, the word-frequency distribution is fitted to a lognormal distribution by maximum likelihood estimation. The Monte Carlo approach is used to generate synthetic data. The running time of TextGen generation depends only on the expected data size, which means that the time complexity of TextGen is O(n). To evaluate TextGen, four real-world datasets were used to perform an experiment. The experimental results show that, compared with SDGen, the compression performance and compression ratio of the datasets generated by TextGen deviate less from real-world datasets when end-tagged dense code, a representative of word-based compressors, is evaluated.
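The pipeline described above (segment the corpus, count word frequencies, fit a lognormal by maximum likelihood, then Monte Carlo-sample words) can be sketched as follows. This is a simplified illustration, not the TextGen implementation: the corpus, function names, and the use of whitespace splitting as "segmentation" are assumptions.

```python
import math
import random
from collections import Counter

def fit_lognormal_mle(frequencies):
    """MLE fit of a lognormal: the mean and standard deviation of the
    log-transformed word frequencies."""
    logs = [math.log(f) for f in frequencies]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))
    return mu, sigma

def generate(corpus_text, n_words, rng=random.Random(7)):
    """Segment the corpus, characterize it at the word level, fit the
    frequency distribution, and Monte Carlo-sample n_words words.
    Generation time depends only on n_words, i.e. O(n)."""
    counts = Counter(corpus_text.split())        # word-level characterization
    mu, sigma = fit_lognormal_mle(list(counts.values()))
    words = list(counts)
    # Draw a synthetic frequency per word from the fitted lognormal and
    # sample words proportionally to those frequencies.
    weights = [rng.lognormvariate(mu, sigma) for _ in words]
    return " ".join(rng.choices(words, weights=weights, k=n_words))

corpus = "to be or not to be that is the question"
sample = generate(corpus, 50)
```

Fitting a parametric distribution rather than sampling the raw corpus directly is what keeps generation time independent of the corpus size, which matters when synthesizing benchmark datasets far larger than the characterized sample.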
Funding: Supported by the National Natural Science Foundation of China (No. 61121061) and the National Key Technology R&D Program (Nos. 2012BAH38B02 and 2012BAH06B00).
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61572394 and 61272098), the Shenzhen Fundamental Research Plan (Nos. JCYJ20120615101127404 and JSGG20140519141854753), and the National Key Technologies R&D Program of China (No. 2011BAH04B03).