Funding: Supported by the National Natural Science Foundation of China (Nos. 40971275 and 50811120111)
Abstract: Based on a spatio-temporal correlativity analysis method, automatic identification techniques for anomalies in coal-mining working-face gas monitoring data are presented. The asynchronous correlative characteristics of gas migration along the working-face airflow direction are qualitatively analyzed. A calculation method for the asynchronous correlation delay step is given, together with prediction and inversion formulas for the gas concentration varying with time and space after a gas emission in the air return roadway. By analyzing one hundred and fifty groups of theoretically correlated gas-sensor data series from a coal mine, the ranges of correlation coefficient values for eight kinds of data anomaly are obtained. A gas monitoring data anomaly identification algorithm based on spatio-temporal correlativity analysis is then presented accordingly. To improve the efficiency of the analysis, gas-sensor coding rules that can express the spatial topological relations are suggested. The experiments indicate that the methods presented in this article can effectively compensate for the defects of methods based on the monitoring data of a single gas sensor.
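As an illustration of the asynchronous correlation idea, the sketch below estimates the delay step between an upstream and a downstream gas sensor by scanning lagged correlation coefficients. It is a minimal sketch on synthetic data; the function name, the lag search range, and the simulated emission step are illustrative assumptions, not the paper's exact algorithm or thresholds.

```python
import numpy as np

def best_lag_correlation(upstream, downstream, max_lag):
    """Find the delay step that maximizes correlation between two
    gas-concentration series measured along the airflow direction.

    upstream, downstream : 1-D arrays of equal length
    max_lag              : largest delay step (in samples) to test
    Returns (best_lag, best_corr).
    """
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        a = upstream[: len(upstream) - lag] if lag else upstream
        b = downstream[lag:]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_corr:
            best_lag, best_corr = lag, r
    return best_lag, best_corr

# Toy example: the downstream sensor sees the same emission event ~5 samples later.
rng = np.random.default_rng(0)
signal = rng.normal(1.0, 0.05, 500) + np.where(np.arange(500) > 200, 0.4, 0.0)
up = signal + rng.normal(0, 0.02, 500)
down = np.roll(signal, 5) + rng.normal(0, 0.02, 500)
lag, corr = best_lag_correlation(up, down, max_lag=20)
print(lag, round(corr, 3))
```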
Funding: Supported by the Chinese Offshore Investigation and Assessment Project (Nos. 908-TJ-10 and 908-TJ-09) and the Initial Fund for Introduced Talent of Tianjin University of Science and Technology (No. 20090413)
Abstract: Studies in the coastal area of Bohai Bay, China, from July 2006 to October 2007 suggest that the method of meiofaunal biomass estimation affects the meiofaunal analysis. Conventional estimation methods, which use a single mean individual weight value for nematodes to calculate total biomass, may bias the results. A modified estimation method, named the Subsection Count Method (SCM), was also used to calculate meiofaunal biomass. It entails only a slight increase in workload but generates results of greater accuracy. Results obtained with these two methods were compared in the present study. The results show that the conventional method generally produces a biased estimate of meiofaunal biomass. The difference between the two estimation methods was highly significant (P < 0.01) for the spring and winter cruises. Furthermore, the estimation method for meiofaunal biomass affected the analysis of horizontal distribution and of correlation with environmental factors. These findings highlight the importance of estimation methods for meiofaunal biomass and will hopefully stimulate further investigation and discussion of the topic.
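To make the contrast between the two estimation strategies concrete, the following sketch compares a conventional single-mean-weight calculation with a subsection-style calculation that weights each body-size class separately, in the spirit of SCM. The size classes, counts, and individual weights are hypothetical illustration values, not data from the study.

```python
# Hypothetical nematode body-size classes (µg dry weight per individual);
# the actual SCM subsections and weights used in the study may differ.
size_classes = {          # counts per sample by body-size class
    "small":  {"count": 120, "weight_ug": 0.10},
    "medium": {"count": 45,  "weight_ug": 0.40},
    "large":  {"count": 10,  "weight_ug": 1.20},
}

total_count = sum(c["count"] for c in size_classes.values())

# Conventional method: one mean individual weight applied to all nematodes.
mean_weight_ug = 0.40                      # assumed literature value
conventional_biomass = total_count * mean_weight_ug

# Subsection-style (SCM) estimate: count each class separately and weight it
# by its own mean individual weight.
scm_biomass = sum(c["count"] * c["weight_ug"] for c in size_classes.values())

print(f"conventional: {conventional_biomass:.1f} µg per sample")
print(f"subsection : {scm_biomass:.1f} µg per sample")
```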
Abstract: The modeling of volatility and correlation is important for calculating hedge ratios, value-at-risk estimates, and CAPM (Capital Asset Pricing Model) betas, for derivative pricing, and for risk management in general. Recent access to intra-daily high-frequency data for two of the most liquid contracts at the Nord Pool exchange has made it possible to apply new and promising methods for analyzing volatility and correlation. The concepts of realized volatility and realized correlation are applied, and this study statistically describes the distribution (both distributional properties and temporal dependencies) of electricity forward data from 2005 to 2009. The main findings show that the logarithmic realized volatility is approximately normally distributed, while realized correlation appears not to be. Further, realized volatility and realized correlation exhibit a long-memory feature. There also appears to be a high correlation between realized correlation and volatilities, and positive relations between trading volume and realized volatility and between trading volume and realized correlation. These results are to a large extent consistent with earlier studies of stylized facts of other financial and commodity markets.
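For readers unfamiliar with realized measures, the sketch below computes daily realized volatility and realized correlation from intraday log returns in the standard way (square root of the sum of squared returns, and realized covariance normalized by the two realized volatilities). The function name and the simulated price paths are assumptions for illustration; the study's actual sampling scheme and data handling may differ.

```python
import numpy as np

def realized_measures(prices_a, prices_b):
    """Daily realized volatility and realized correlation from intraday prices.

    prices_a, prices_b : arrays of intraday prices for two forward contracts
    Returns (rv_a, rv_b, realized_corr) for the day.
    """
    r_a = np.diff(np.log(prices_a))           # intraday log returns, contract A
    r_b = np.diff(np.log(prices_b))           # intraday log returns, contract B
    rv_a = np.sqrt(np.sum(r_a ** 2))          # realized volatility, contract A
    rv_b = np.sqrt(np.sum(r_b ** 2))          # realized volatility, contract B
    rcov = np.sum(r_a * r_b)                  # realized covariance
    rcorr = rcov / (rv_a * rv_b)              # realized correlation
    return rv_a, rv_b, rcorr

# Toy example with two correlated intraday price paths (288 five-minute ticks).
rng = np.random.default_rng(1)
common = rng.normal(0, 0.002, 288)
pa = 40 * np.exp(np.cumsum(common + rng.normal(0, 0.001, 288)))
pb = 42 * np.exp(np.cumsum(common + rng.normal(0, 0.001, 288)))
print(realized_measures(pa, pb))
```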
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61562087 and 61772270), the National High-Tech R&D Program (863) of China (No. 2015AA015303), the Natural Science Foundation of Jiangsu Province, China (No. BK20130735), the Universities Natural Science Foundation of Jiangsu Province, China (No. 13KJB520011), and the Science Foundation of Nanjing Institute of Technology, China (No. YKJ201420)
Abstract: After a composite service is deployed, user privacy requirements and the trust levels of component services are subject to variation. When such changes occur, it is critical to preserve privacy information flow security. We propose an approach to preserving privacy information flow security during composite service evolution. First, a privacy data item dependency analysis method based on a Petri net model is presented. Then the set of privacy data items collected by each component service is derived through a privacy data item dependency graph, and the security scope of each component service is calculated. Finally, evolution operations that preserve privacy information flow security are defined. By applying these evolution operations, the re-verification process is avoided and evolution efficiency is improved. To illustrate the effectiveness of our approach, a case study is presented. The experimental results indicate that our approach has high evolution efficiency and can greatly reduce the cost of evolution compared with re-verifying the entire composite service.
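The dependency-graph step can be pictured with the rough sketch below, which propagates privacy data items along hypothetical service-to-service flows and flags components whose trust level does not cover what they can collect. The services, data items, and trust levels are invented for illustration, and the code does not reproduce the paper's Petri net model or its formal security definition.

```python
# Minimal sketch (not the paper's Petri-net formalism): propagate privacy data
# items along service-to-service data dependencies and check each component's
# trust level against the sensitivity of the items it can collect.
from collections import defaultdict, deque

# Hypothetical dependency edges: data flows from the first service to the second.
flows = [("Booking", "Payment"), ("Booking", "Airline"), ("Payment", "Bank")]
# Privacy data items each service receives directly from the user (hypothetical).
direct_items = {"Booking": {"name", "passport_no"}, "Payment": {"card_no"}}
# Trust level required to hold each item vs. trust level of each service.
required_trust = {"name": 1, "passport_no": 2, "card_no": 3}
service_trust = {"Booking": 3, "Payment": 3, "Airline": 1, "Bank": 3}

# Propagate items forward through the dependency graph (transitive closure).
succ = defaultdict(list)
for src, dst in flows:
    succ[src].append(dst)

collected = defaultdict(set, {s: set(items) for s, items in direct_items.items()})
queue = deque(direct_items)
while queue:
    s = queue.popleft()
    for nxt in succ[s]:
        before = len(collected[nxt])
        collected[nxt] |= collected[s]
        if len(collected[nxt]) > before:
            queue.append(nxt)

# A service stays within its security scope if its trust level covers every item.
for service, items in collected.items():
    ok = all(service_trust[service] >= required_trust[i] for i in items)
    print(service, sorted(items), "OK" if ok else "violates privacy flow")
```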
Funding: Supported by the National Basic Research Program (973) of China (No. 2004CB117306) and the Hi-Tech Research and Development Program (863) of China (No. 2006AA10A102)
Abstract: Association analysis provides an opportunity to find genetic variants underlying complex traits. A principal components regression (PCR)-based approach has been shown to outperform some competing approaches. However, a limitation of this method is that the principal components (PCs) selected from single nucleotide polymorphisms (SNPs) may be unrelated to the phenotype. In this article, we investigate the theoretical properties of this method in more detail. We first derive the exact power function of the test based on PCR, and hence clarify the relationship between the test power and the degrees of freedom (DF). Next, we extend the PCR test to a general weighted-PCs test, which provides a unified framework for understanding the properties of some related statistics. We then compare the performance of these tests. We also introduce several data-driven adaptive alternatives to overcome difficulties in the PCR approach. Finally, we illustrate our results using simulations based on real genotype data. The simulation study shows the risk of using an unsupervised rule to determine the number of PCs, and demonstrates that there is no single uniformly powerful method for detecting genetic variants.
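A minimal sketch of a PCR-based association test is given below: the phenotype is regressed on the leading PC scores of the genotype matrix and assessed with an F statistic whose numerator degrees of freedom equal the number of PCs retained. The function name and the toy genotype data are assumptions; this is not the authors' implementation or their weighted-PCs extension.

```python
import numpy as np
from scipy import stats

def pcr_association_test(genotypes, phenotype, n_pcs):
    """Principal-components-regression association test (sketch).

    genotypes : (n_samples, n_snps) matrix of genotype codes (0/1/2)
    phenotype : (n_samples,) quantitative trait
    n_pcs     : number of leading PCs used as regressors (numerator DF)
    Returns (F_statistic, p_value).
    """
    X = genotypes - genotypes.mean(axis=0)         # centre each SNP
    y = phenotype - phenotype.mean()               # centre the trait
    # Leading principal component scores of the genotype matrix via SVD.
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    pcs = U[:, :n_pcs] * S[:n_pcs]
    # Ordinary least squares of the trait on the selected PCs.
    beta, _, _, _ = np.linalg.lstsq(pcs, y, rcond=None)
    fitted = pcs @ beta
    rss1 = np.sum((y - fitted) ** 2)               # residual SS, PC model
    rss0 = np.sum(y ** 2)                          # residual SS, mean-only model
    df1, df2 = n_pcs, len(y) - n_pcs - 1
    F = ((rss0 - rss1) / df1) / (rss1 / df2)
    return F, stats.f.sf(F, df1, df2)

# Toy example: a trait weakly driven by the first SNP.
rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(200, 20)).astype(float)
y = 0.5 * G[:, 0] + rng.normal(0, 1, 200)
print(pcr_association_test(G, y, n_pcs=5))
```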