Abstract: In this paper, we present high-resolution seismic reflection data for a depth range of several hundred meters across the Fenhe fault in Taiyuan city, China. In combination with the relevant borehole logs, these data provide useful constraints on the precise position, geometry, and deformation rate of the fault, as well as on the kinematics of recent fault motion. The high-resolution seismic reflection profiling revealed that the western branch of the Fenhe fault is a high-angle, eastward-dipping, oblique normal fault that cuts upward into the lower part of the Quaternary system, with its top breaking point at a depth of ~70 m below the ground surface. A borehole log across the Fenhe fault allowed us to infer two high-angle, oppositely dipping, oblique normal faults. The eastern branch lies beneath the eastern embankment of the Fenhe river, dipping to the west and cutting into the late Pleistocene to Holocene strata with a maximum vertical offset of ~8 m. Another borehole log across the northern segment of the Fenhe fault indicates that the western branch has cut into the late Pleistocene to Holocene strata with a maximum vertical offset of ~6 m. Together, these data yield a minimum average late Pleistocene to Holocene vertical slip rate of 0.06~0.08 mm/a and a maximum average large-earthquake recurrence interval of 5.0~6.7 ka for the Fenhe fault.
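The quoted slip rate and recurrence interval follow from simple arithmetic on the borehole offsets. A hedged reconstruction, assuming the faulted strata are roughly 100 ka old and a characteristic per-event slip of ~0.4 m; neither value is stated explicitly in the abstract, so both are illustrative:

```latex
% Minimum vertical slip rate from maximum offset D over strata age T
% (T ~ 100 ka is an assumption, not a value given in the abstract):
v = \frac{D}{T} \approx \frac{6\ \text{to}\ 8\ \mathrm{m}}{100\ \mathrm{ka}}
  \approx 0.06\ \text{to}\ 0.08\ \mathrm{mm/a}
% With a hypothetical characteristic per-event slip d ~ 0.4 m,
% the recurrence interval follows as
T_r = \frac{d}{v} \approx \frac{0.4\ \mathrm{m}}{0.06\ \text{to}\ 0.08\ \mathrm{mm/a}}
  \approx 5.0\ \text{to}\ 6.7\ \mathrm{ka}
```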
Funding: Supported by the National Natural Science Foundation of China (61374140) and the Shanghai Pujiang Program (12PJ1402200).
Abstract: A novel approach named aligned mixture probabilistic principal component analysis (AMPPCA) is proposed in this study for fault detection in multimode chemical processes. In order to exploit within-mode correlations, the AMPPCA algorithm first estimates a statistical description of each operating mode by applying mixture probabilistic principal component analysis (MPPCA). For comparison, a combined MPPCA is employed, in which monitoring results are softly integrated according to the posterior probabilities of the test sample in each local model. To exploit cross-mode correlations, which may be informative but are neglected when monitoring models are built separately for each mode, a global monitoring model is constructed by aligning all local models together. In this way, both within-mode and cross-mode correlations are preserved in the integrated space. Finally, the utility and feasibility of AMPPCA are demonstrated on a non-isothermal continuous stirred tank reactor and the Tennessee Eastman (TE) benchmark process.
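The soft-integration step lends itself to a compact illustration. Below is a minimal sketch using scikit-learn's GaussianMixture as a stand-in for MPPCA (the alignment step of AMPPCA is not reproduced here); the synthetic two-mode data, the variable count, and the monitoring_statistic helper are all illustrative, not the authors' implementation.

```python
# Posterior-weighted multimode monitoring sketch: one Gaussian component
# per operating mode, with per-mode statistics softly combined.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic operating modes of a 5-variable process.
mode1 = rng.normal(loc=0.0, scale=1.0, size=(500, 5))
mode2 = rng.normal(loc=5.0, scale=0.5, size=(500, 5))
X = np.vstack([mode1, mode2])

# One Gaussian component per operating mode (the local models).
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)

def monitoring_statistic(x):
    """Posterior-weighted Mahalanobis distance (a T^2-like statistic)."""
    post = gmm.predict_proba(x.reshape(1, -1)).ravel()   # mode posteriors
    d2 = np.empty(gmm.n_components)
    for k in range(gmm.n_components):
        diff = x - gmm.means_[k]
        d2[k] = diff @ np.linalg.solve(gmm.covariances_[k], diff)
    return float(post @ d2)                              # soft integration

print(monitoring_statistic(np.zeros(5)))       # normal sample: small value
print(monitoring_statistic(np.full(5, 10.0)))  # faulty sample: large value
```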
Abstract: The present analysis was performed to obtain the bearing strength of pinned joints in unidirectional graphite/epoxy composite laminates using the characteristic curve model. The characteristic dimensions used to determine the characteristic curve were evaluated with a two-dimensional finite element model developed in ANSYS 14.5. A two-dimensional finite element stress analysis was also performed to determine the stress distribution needed for the failure evaluation. The Tsai-Wu failure criterion was applied along the characteristic curve to predict bearing strength. The results of the analysis showed good agreement with experimental data.
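To make the failure-evaluation step concrete, here is a minimal sketch of the Tsai-Wu index evaluated at points on a Chang-Scott-Springer-style characteristic curve. The material strengths, characteristic lengths, and ply stresses below are placeholder values (in practice the finite element analysis supplies the stresses), and tsai_wu_index is a hypothetical helper, not code from the study.

```python
# Tsai-Wu failure index sampled along a characteristic curve around a
# pin-loaded hole; failure is predicted where the index reaches 1.
import numpy as np

# Hypothetical unidirectional graphite/epoxy strengths, MPa (illustrative).
Xt, Xc = 1500.0, 1200.0   # longitudinal tensile / compressive strength
Yt, Yc = 50.0, 250.0      # transverse tensile / compressive strength
S = 70.0                  # in-plane shear strength

F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
F12 = -0.5 * np.sqrt(F11 * F22)   # common approximation of the interaction term

def tsai_wu_index(s1, s2, t12):
    """Plane-stress Tsai-Wu index in ply axes (1 = fiber direction)."""
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

# Characteristic curve: r(theta) = D/2 + Rt + (Rc - Rt) cos(theta),
# with illustrative hole diameter and characteristic lengths in mm.
D, Rt, Rc = 6.0, 0.5, 1.2
theta = np.linspace(-np.pi/2, np.pi/2, 19)
r = D/2 + Rt + (Rc - Rt) * np.cos(theta)   # FE stresses are read at radius r

# Placeholder ply stresses at the curve points (an FE solver supplies these).
s1, s2, t12 = -400*np.cos(theta), 30*np.sin(theta), 40*np.sin(2*theta)
print("max Tsai-Wu index on curve:", tsai_wu_index(s1, s2, t12).max())
```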
Abstract: In machine learning and statistics, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known. SVMs (support vector machines) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. The basic SVM takes a set of input data and predicts, for each given input, which of two possible classes forms the output, making it a non-probabilistic binary linear classifier. In pattern recognition problems, the selection of the features used to characterize an object to be classified is important. Kernel methods are algorithms that, by replacing the inner product with an appropriate positive definite function, implicitly perform a nonlinear mapping Φ of the input data in R^n into a high-dimensional feature space H. Cover's theorem states that if the transformation is nonlinear and the dimensionality of the feature space is high enough, then the input space may be transformed into a new feature space in which the patterns are linearly separable with high probability.
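The kernel-mapping argument can be seen directly in a few lines. A minimal sketch using scikit-learn: concentric circles are not linearly separable in the input space, but an RBF kernel, which implicitly maps the data into a high-dimensional feature space, separates them; the dataset and parameters are illustrative, not from the paper.

```python
# Linear vs. RBF-kernel SVM on data that is not linearly separable
# in the input space, illustrating the implicit nonlinear mapping.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, factor=0.4, noise=0.08, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = SVC(kernel="linear").fit(X_tr, y_tr)
rbf = SVC(kernel="rbf", gamma=2.0).fit(X_tr, y_tr)   # implicit feature map

print("linear kernel accuracy:", linear.score(X_te, y_te))  # near chance
print("RBF kernel accuracy:", rbf.score(X_te, y_te))        # near 1.0
```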
Funding: Supported by the National Aerospace Science Foundation of China (20140751008).
Abstract: With the progression of the digital age, the complexity of software continues to grow. As a result, methods to quantitatively assess characteristics of software have attracted significant attention. These efforts have led to a large number of new measures, such as coupling metrics, many of which seek to consider the impact of correlations between components and failures on application reliability. However, most of these approaches set the coupling parameters arbitrarily by making assumptions instead of utilizing experimental data, and therefore may not accurately capture the actual coupling between components of a software application. Because the coupling matrix is often set arbitrarily, existing approaches to assessing software reliability that consider component correlation fail to reflect the real degree of interaction and the relationships among software components. This paper presents an efficient approach to assessing software reliability under correlated component failures, incorporating software architecture and the actual internal coupling of the software through a multivariate Bernoulli (MVB) distribution. The unified framework for software coupling measurement is informed by a comprehensive survey of frameworks for object-oriented and procedure-oriented software, and enables the extraction of more accurate coupling among components. The effectiveness of the method is illustrated through an experimental study applying it to a real-time software application.
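The role of failure correlation is easy to demonstrate in the bivariate case of the MVB distribution. A minimal sketch with illustrative marginal failure probabilities and correlation (not values from the paper), comparing series-system reliability against the independence assumption:

```python
# Two components with correlated Bernoulli failures: closed-form joint
# probabilities and the resulting series-system reliability.
import numpy as np

p1, p2, rho = 0.05, 0.08, 0.6   # marginal failure probs and correlation (illustrative)

# Joint P(both fail) for a bivariate Bernoulli with correlation rho:
# Cov = rho * sqrt(p1(1-p1) p2(1-p2)), P(both) = p1*p2 + Cov.
cov = rho * np.sqrt(p1*(1-p1) * p2*(1-p2))
p_both = p1*p2 + cov

# A series system survives only if neither component fails:
# P(neither) = 1 - p1 - p2 + P(both).
rel_correlated = 1 - p1 - p2 + p_both
rel_independent = (1 - p1) * (1 - p2)
print(f"reliability, correlated:  {rel_correlated:.4f}")
print(f"reliability, independent: {rel_independent:.4f}")
```

Positively correlated failures raise P(both fail), so the series system here is more reliable than the independence assumption predicts; ignoring the coupling biases the estimate, which is the motivation for measuring it rather than assuming it.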