Funding: Supported by the National 863 Program (Grant No. 2006AA09A102-09) and the National 973 Program (Grant No. 2007CB209606).
Abstract: The filter operator used in a conventional multichannel matching filter is physically realizable: it can only delay the seismic data during filtering. A non-causal multichannel matching filter based on a least-squares criterion is proposed to resolve the problem that arises when the predicted multiple model arrives later than the recorded data. The differences between causal and non-causal multichannel matching filters are compared on a synthetic shot gather, which demonstrates the validity of the non-causal matching filter. In addition, a variable-length sliding window that changes with offset and layer velocity is proposed to address the problem that, with a fixed-length sliding window, the number of events inside the window grows as offset increases. This variable-length sliding window is also introduced into the modified and expanded multichannel matching filter. The method is applied to the Pluto1.5 synthetic data set, and the good multiple-attenuation result demonstrates the benefits of the non-causal filter operator and the variable-length sliding window.
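To make the least-squares construction concrete, the sketch below is a minimal single-channel illustration, not the authors' multichannel implementation; the trace length, filter half-length, and pre-whitening factor are assumptions. The filter taps span both negative and positive lags, so the matched multiple model can be advanced when it arrives later than the recorded data.

```python
# Minimal sketch of a non-causal least-squares matching filter
# (single channel; illustrative assumptions, not the authors' code).
import numpy as np

def noncausal_matching_filter(model, data, half_len=10, eps=1e-3):
    """Taps f over 2*half_len+1 lags minimizing ||data - conv(model, f)||^2.
    `model` and `data` are same-length 1-D traces."""
    lags = range(-half_len, half_len + 1)
    n = len(data)
    # Each column of M is the predicted multiple model shifted by one lag.
    M = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(model, lag)
        if lag > 0:
            shifted[:lag] = 0.0      # zero samples wrapped from the end
        elif lag < 0:
            shifted[lag:] = 0.0      # zero samples wrapped from the start
        M[:, j] = shifted
    # Damped normal equations (pre-whitening by eps) give the least-squares taps.
    f = np.linalg.solve(M.T @ M + eps * np.eye(len(lags)), M.T @ data)
    return f, M @ f                  # taps and the matched multiple estimate

# Usage: subtract the matched multiples to estimate the primaries.
# f, matched = noncausal_matching_filter(predicted_multiples, recorded_trace)
# primaries = recorded_trace - matched
```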
Funding: Support provided by The Science and Technology Development Fund, Macao SAR, China (File Nos. 0057/2020/AGJ and SKL-IOTSC-2021-2023) and the Science and Technology Program of Guangdong Province, China (Grant No. 2021A0505080009).
Abstract: Accurate prediction of shield tunneling-induced settlement is a complex problem that requires consideration of many influential parameters. Recent studies reveal that machine learning (ML) algorithms can predict the settlement caused by tunneling. However, well-performing ML models are usually less interpretable, and irrelevant input features decrease both the performance and the interpretability of an ML model. Nonetheless, feature selection, a critical step in the ML pipeline, is usually ignored in studies focused on predicting tunneling-induced settlement. This study applies four techniques, i.e. the Pearson correlation method, sequential forward selection (SFS), sequential backward selection (SBS) and the Boruta algorithm, to investigate the effect of feature selection on model performance when predicting the tunneling-induced maximum surface settlement (S_max). The data set used in this study was compiled from two metro tunnel projects excavated in Hangzhou, China using earth pressure balance (EPB) shields and consists of 14 input features and a single output (i.e. S_max). The ML model trained on the features selected by the Boruta algorithm demonstrates the best performance in both the training and testing phases. The relevant features chosen by the Boruta algorithm further indicate that tunneling-induced settlement is affected by parameters related to tunnel geometry, geological conditions and shield operation. The recently proposed Shapley additive explanations (SHAP) method explores how the input features contribute to the output of a complex ML model. It is observed that larger settlements are induced when the shield tunnels through silty clay. Moreover, the SHAP analysis reveals that low magnitudes of face pressure at the top of the shield increase the model's output.
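As a rough illustration of the Boruta-plus-SHAP workflow described above (the file name and column names are hypothetical, not the authors' data set), a minimal Python sketch might look like this:

```python
# Illustrative sketch: Boruta feature selection followed by SHAP explanation
# of a tree-based model predicting maximum surface settlement S_max.
import pandas as pd
import shap
from boruta import BorutaPy
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("tunnel_settlement.csv")          # hypothetical file: 14 features + S_max
X, y = df.drop(columns=["S_max"]), df["S_max"]

# Boruta keeps features that beat their randomized "shadow" copies.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
boruta = BorutaPy(rf, n_estimators="auto", random_state=0)
boruta.fit(X.values, y.values)
selected = X.columns[boruta.support_].tolist()
print("Features confirmed by Boruta:", selected)

# Retrain on the selected features and explain predictions with SHAP.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[selected], y)
shap_values = shap.TreeExplainer(model).shap_values(X[selected])
shap.summary_plot(shap_values, X[selected])        # feature contributions to S_max
```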
Funding: Supported by a Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure, and Transport (Grant 22CTAP-C163951-02).
Abstract: Recently, convolutional neural network (CNN)-based visual inspection has been developed to detect defects on building surfaces automatically. The CNN model demonstrates remarkable accuracy in image data analysis; however, the predicted results carry uncertainty in providing accurate information to users because of the "black box" problem in the deep learning model. Therefore, this study proposes a visual explanation method to overcome this uncertainty limitation of CNN-based defect identification. The gradient-weighted class activation mapping (Grad-CAM) method is adopted to provide visually explainable information. A visualizing evaluation index is proposed to quantitatively analyze the visual representations; this index reflects a rough estimate of the concordance rate between the visualized heat map and the intended defects. In addition, an ablation study, adopting three-branch combinations with VGG16, is implemented to identify performance variations by visualizing the predicted results. Experiments reveal that the proposed model, combined with hybrid pooling, batch normalization, and multi-attention modules, achieves the best performance with an accuracy of 97.77%, an improvement of 2.49% over the baseline model. Consequently, this study demonstrates that reliable results from an automatic defect classification model can be provided to an inspector through the visual representation of the predicted results of CNN models.
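For reference, a minimal Grad-CAM sketch in PyTorch (hooking the last convolutional layer of torchvision's pretrained VGG16; an assumption-laden illustration, not the authors' three-branch defect model) could look like this:

```python
# Minimal Grad-CAM sketch; assumes a preprocessed input tensor `img`
# of shape [1, 3, 224, 224]. Not the authors' pipeline.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
features, grads = {}, {}

# Capture activations and gradients of the last convolutional layer.
target_layer = model.features[28]
target_layer.register_forward_hook(lambda m, i, o: features.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

def grad_cam(img, class_idx=None):
    logits = model(img)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Weight each feature map by its average gradient, then ReLU and upsample.
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * features["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heat map in [0, 1]
```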
Abstract: In this paper I examine the following claims by William Eaton in his monograph Boyle on Fire: (i) that Boyle's religious convictions led him to believe that the world was not completely explicable, and this shows that there is a shortcoming in the power of mechanical explanations; (ii) that mechanical explanations offer only sufficient, not necessary explanations, and this too was taken by Boyle to be a limit in the explanatory power of mechanical explanations; (iii) that the mature Boyle thought that there could be more intelligible explanatory models than mechanism; and (iv) that what Boyle says at any point in his career is incompatible with the statement of Maria Boas-Hall, i.e., that the mechanical hypothesis can explicate all natural phenomena. Since all four of these claims are part of Eaton's developmental argument, my rejection of them will not only show how the particular developmental story Eaton diagnoses is inaccurate, but will also explain what limits there actually are in Boyle's account of the intelligibility of mechanical explanations. My account will also show why important philosophers like Locke and Leibniz should be interested in Boyle's philosophical work.
Abstract: This paper takes a microanalytic perspective on the speech and gestures used by one teacher of ESL (English as a Second Language) in an intensive English program classroom. Videotaped excerpts from her intermediate-level grammar course were transcribed to represent the speech, gesture and other non-verbal behavior that accompanied unplanned explanations of vocabulary that arose during three focus-on-form lessons. The gesture classification system of McNeill (1992), which delineates different types of hand movements (iconics, metaphorics, deictics, beats), was used to understand the role the gestures played in these explanations. Results suggest that gestures and other non-verbal behavior are forms of input to classroom second language learners that must be considered a salient factor in classroom-based SLA (Second Language Acquisition) research.
Funding: This work was partially supported by the Beijing Natural Science Foundation (No. 4222038), by the Open Research Project of the State Key Laboratory of Media Convergence and Communication (Communication University of China), by the National Key R&D Program of China (No. 2021YFF0307600), and by the Fundamental Research Funds for the Central Universities.
Abstract: Existing explanation methods for Convolutional Neural Networks (CNNs) lack pixel-level visualization explanations that generate reliable fine-grained decision features. Since there are inconsistencies between an explanation and the actual behavior of the model being interpreted, we propose a Fine-Grained Visual Explanation for CNNs, namely F-GVE, which produces a fine-grained explanation with higher consistency with the decision of the original model. The exact backward class-specific gradients with respect to the input image are obtained to highlight the object-related pixels the model uses to make predictions. In addition, for better visualization and less noise, F-GVE selects an appropriate threshold to filter the gradient during the calculation, and the explanation map is obtained by element-wise multiplying the gradient and the input image to show fine-grained classification decision features. Experimental results demonstrate that F-GVE has good visual performance and highlights the importance of fine-grained decision features. Moreover, the faithfulness of the explanation is high, and the method is effective and practical for troubleshooting and debugging detection.
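A minimal sketch of the general gradient-times-input idea with a gradient threshold, as described above (the quantile threshold and function signature are illustrative assumptions, not the exact F-GVE implementation):

```python
# Sketch of thresholded gradient * input as a pixel-level explanation.
import torch

def fine_grained_explanation(model, img, class_idx, grad_quantile=0.8):
    """Return a pixel-level map: class-specific input gradient, thresholded,
    multiplied element-wise by the input image."""
    img = img.clone().requires_grad_(True)
    logits = model(img)
    model.zero_grad()
    logits[0, class_idx].backward()            # exact class-specific gradients
    grad = img.grad.detach()
    # Keep only the strongest gradients to suppress noise before weighting pixels.
    thresh = grad.abs().quantile(grad_quantile)
    grad = torch.where(grad.abs() >= thresh, grad, torch.zeros_like(grad))
    explanation = (grad * img.detach()).sum(dim=1, keepdim=True)
    return explanation.abs()
```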
Abstract: The flow regimes of a GLCC with a horizontal inlet and a vertical pipe are investigated experimentally, and velocity and pressure-drop data labeled with the corresponding flow regimes are collected. Combined with flow regime data for other GLCC configurations reported in the existing literature, the gas and liquid superficial velocities and the pressure drops are used, respectively, as inputs to machine learning algorithms applied to identify the flow regimes. The choice of input data types takes into account the availability of data in practical industrial settings, and twelve machine learning algorithms are chosen from the classical and popular classification algorithms, including typical ensemble models, SVM, KNN, a Bayesian model and an MLP. The identification results show that the gas and liquid superficial velocities are the ideal type of input data for identifying flow regimes by machine learning: most of the ensemble models can identify the GLCC flow regimes from the gas and liquid velocities with an accuracy of 0.99 or more. The pressure drops are not as suitable an input as the gas and liquid velocities; with them, only XGBoost and Bagging Tree can identify the GLCC flow regimes accurately. The successes and confusions of each algorithm are analyzed and explained based on the experimental observations of the flow regime evolution processes, the flow regime map, and the principles of the algorithms. The applicability and feasibility of each algorithm for GLCC flow regime identification with different types of input data are then proposed.
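As a rough illustration (hypothetical file and column names, not the authors' data), identifying flow regimes from the gas and liquid superficial velocities with two of the models mentioned above might look like this:

```python
# Illustrative sketch: flow regime classification from superficial velocities.
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

df = pd.read_csv("glcc_flow_regimes.csv")          # hypothetical labeled data set
X = df[["v_sg", "v_sl"]]                           # gas and liquid superficial velocities
y = LabelEncoder().fit_transform(df["regime"])     # flow regime labels encoded as 0..k-1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
for name, clf in [("XGBoost", XGBClassifier()), ("Bagging Tree", BaggingClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```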