To study the fracture mechanism of rocks with different brittle mineral contents, this study proposes a method to identify the acoustic emission (AE) signals released by rock fracture under different brittle mineral content (BMC) and thereby determine the content of brittle minerals in rock. To handle interference such as noise in the AE signals released by rock mass rupture, a 1DCNN-BLSTM network model with an SE (squeeze-and-excitation) module is constructed. The signal data are processed through the 1DCNN and BLSTM networks to fully extract the time-series correlation features of the signals, the non-correlated features of the local space, and the weak periodicity law. The processed features are then fed into fully connected layers, and a softmax function is used to identify the AE signals released by different rocks and thus determine their brittle mineral content. Experimental comparison and analysis show that the 1DCNN-BLSTM model embedded with the SE module has good anti-noise performance, with recognition accuracy above 90%, outperforming traditional deep network models and providing a new approach for rock acoustic emission research.
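The SE module mentioned above reweights feature channels by their global importance before classification. A minimal NumPy sketch of the squeeze-excite-scale pattern, with randomly initialized weights standing in for the learned excitation layers (all shapes, names, and the `reduction` ratio here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def se_block(x, reduction=4, rng=None):
    """Squeeze-and-Excitation reweighting for a batch of 1-D feature maps.

    x: array of shape (batch, channels, length).
    The weights w1, w2 are random here purely for illustration; in a
    trained model they would be learned parameters.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    b, c, _ = x.shape
    # Squeeze: global average pooling over the time axis -> (batch, channels)
    s = x.mean(axis=2)
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1)
    w1 = rng.standard_normal((c, c // reduction))
    w2 = rng.standard_normal((c // reduction, c))
    h = np.maximum(s @ w1, 0.0)              # ReLU
    gate = 1.0 / (1.0 + np.exp(-(h @ w2)))   # sigmoid
    # Scale: reweight each channel of the original signal
    return x * gate[:, :, None]

# Toy batch: 2 AE signal windows, 8 feature channels, 100 time steps
signal = np.random.default_rng(1).standard_normal((2, 8, 100))
out = se_block(signal)
print(out.shape)  # (2, 8, 100)
```

Because the gate is a sigmoid, each channel is attenuated in proportion to its estimated importance, which is what gives the model its channel-attention behavior.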
Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying from visual information, primarily lip movements. In this study, we created a custom dataset for Indian English and organized the work into three main parts: (1) audio recognition, (2) visual feature extraction, and (3) combined audiovisual recognition. Audio features were extracted using mel-frequency cepstral coefficients (MFCCs), and classification was performed using a one-dimensional convolutional neural network. Visual features were extracted using Dlib, and visual speech was classified using a long short-term memory (LSTM) recurrent neural network. Finally, the two modalities were integrated using a deep convolutional network. Audio speech recognition on Indian English achieved training and testing accuracies of 93.67% and 91.53%, respectively, after 200 epochs. Visual speech recognition on the Indian English dataset achieved a training accuracy of 77.48% and a test accuracy of 76.19% after 60 epochs. After integration, audiovisual speech recognition on the Indian English dataset achieved training and testing accuracies of 94.67% and 91.75%, respectively.
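The abstract's integration step fuses the audio and visual streams with a deep convolutional network, whose architecture is not given here. As a much simpler stand-in, the sketch below shows weighted late fusion of per-modality softmax posteriors, purely to illustrate how two streams' class scores can be combined; the `audio_weight` coefficient and the toy logits are invented for illustration and are not from the paper:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(audio_logits, visual_logits, audio_weight=0.6):
    """Weighted late fusion of per-modality class posteriors.

    This is only a minimal illustrative stand-in; the study itself fuses
    the streams with a deep convolutional network.
    """
    pa = softmax(audio_logits)
    pv = softmax(visual_logits)
    fused = audio_weight * pa + (1.0 - audio_weight) * pv
    return fused.argmax(axis=-1)

# Toy logits: 3 utterances over 4 word classes
audio = np.array([[2.0, 0.1, 0.1, 0.1],
                  [0.1, 1.5, 0.2, 0.1],
                  [0.3, 0.2, 0.1, 2.2]])
visual = np.array([[1.8, 0.2, 0.3, 0.1],
                   [0.2, 0.1, 1.9, 0.3],
                   [0.1, 0.4, 0.2, 1.7]])
labels = fuse_predictions(audio, visual)
print(labels)  # one predicted class index per utterance
```

On the second toy utterance the modalities disagree (audio favors class 1, visual favors class 2); with the higher audio weight, the fused decision follows the audio stream, which mirrors why a learned fusion network is preferable to a fixed mixing weight.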
Funding: Supported by projects of the National Natural Science Foundation of China (Nos. 52074088, 52174022, 51574088, 51404073); the Provincial Outstanding Youth Reserve Talent Project of Northeast Petroleum University (No. SJQH202002); the 2020 Northeast Petroleum University Western Oilfield Development Special Project (No. XBYTKT202001); and the Postdoctoral Research Start-Up Program of Heilongjiang Province (Nos. LBH-Q20074, LBH-Q21086).